00:00:00.000 Started by upstream project "autotest-per-patch" build number 132810 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.043 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.043 The recommended git tool is: git 00:00:00.044 using credential 00000000-0000-0000-0000-000000000002 00:00:00.046 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.084 Fetching changes from the remote Git repository 00:00:00.086 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.122 Using shallow fetch with depth 1 00:00:00.122 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.122 > git --version # timeout=10 00:00:00.162 > git --version # 'git version 2.39.2' 00:00:00.162 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.197 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.197 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.150 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.165 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.179 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.179 > git config core.sparsecheckout # timeout=10 00:00:06.194 > git read-tree -mu HEAD # timeout=10 00:00:06.210 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.234 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.234 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.341 [Pipeline] Start of Pipeline 00:00:06.355 [Pipeline] library 00:00:06.356 Loading library shm_lib@master 00:00:06.357 Library shm_lib@master is cached. Copying from home. 00:00:06.373 [Pipeline] node 00:00:06.403 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 00:00:06.405 [Pipeline] { 00:00:06.414 [Pipeline] catchError 00:00:06.416 [Pipeline] { 00:00:06.427 [Pipeline] wrap 00:00:06.434 [Pipeline] { 00:00:06.441 [Pipeline] stage 00:00:06.442 [Pipeline] { (Prologue) 00:00:06.676 [Pipeline] sh 00:00:07.548 + logger -p user.info -t JENKINS-CI 00:00:07.579 [Pipeline] echo 00:00:07.580 Node: WFP8 00:00:07.587 [Pipeline] sh 00:00:07.926 [Pipeline] setCustomBuildProperty 00:00:07.937 [Pipeline] echo 00:00:07.939 Cleanup processes 00:00:07.944 [Pipeline] sh 00:00:08.236 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:00:08.236 90854 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:00:08.249 [Pipeline] sh 00:00:08.540 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:00:08.540 ++ grep -v 'sudo pgrep' 00:00:08.540 ++ awk '{print $1}' 00:00:08.540 + sudo kill -9 00:00:08.540 + true 00:00:08.556 [Pipeline] cleanWs 00:00:08.565 [WS-CLEANUP] Deleting project workspace... 00:00:08.566 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.578 [WS-CLEANUP] done 00:00:08.582 [Pipeline] setCustomBuildProperty 00:00:08.596 [Pipeline] sh 00:00:08.886 + sudo git config --global --replace-all safe.directory '*' 00:00:08.997 [Pipeline] httpRequest 00:00:10.755 [Pipeline] echo 00:00:10.756 Sorcerer 10.211.164.112 is alive 00:00:10.762 [Pipeline] retry 00:00:10.763 [Pipeline] { 00:00:10.771 [Pipeline] httpRequest 00:00:10.774 HttpMethod: GET 00:00:10.775 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.776 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.806 Response Code: HTTP/1.1 200 OK 00:00:10.806 Success: Status code 200 is in the accepted range: 200,404 00:00:10.806 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:35.199 [Pipeline] } 00:00:35.214 [Pipeline] // retry 00:00:35.220 [Pipeline] sh 00:00:35.507 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:35.527 [Pipeline] httpRequest 00:00:36.258 [Pipeline] echo 00:00:36.260 Sorcerer 10.211.164.112 is alive 00:00:36.268 [Pipeline] retry 00:00:36.270 [Pipeline] { 00:00:36.283 [Pipeline] httpRequest 00:00:36.288 HttpMethod: GET 00:00:36.288 URL: http://10.211.164.112/packages/spdk_b6a18b192deed44d4966a73e82862012fc8e96b4.tar.gz 00:00:36.289 Sending request to url: http://10.211.164.112/packages/spdk_b6a18b192deed44d4966a73e82862012fc8e96b4.tar.gz 00:00:36.318 Response Code: HTTP/1.1 200 OK 00:00:36.318 Success: Status code 200 is in the accepted range: 200,404 00:00:36.318 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk_b6a18b192deed44d4966a73e82862012fc8e96b4.tar.gz 00:07:03.046 [Pipeline] } 00:07:03.061 [Pipeline] // retry 00:07:03.067 [Pipeline] sh 00:07:03.357 + tar --no-same-owner -xf spdk_b6a18b192deed44d4966a73e82862012fc8e96b4.tar.gz 00:07:05.909 [Pipeline] sh 00:07:06.198 + git -C spdk log --oneline -n5 00:07:06.198 b6a18b192 nvme/rdma: Don't limit max_sge if UMR is used 00:07:06.198 1148849d6 nvme/rdma: Register UMR per IO request 00:07:06.198 0787c2b4e accel/mlx5: Support mkey registration 00:07:06.198 0ea9ac02f accel/mlx5: Create pool of UMRs 00:07:06.198 60adca7e1 lib/mlx5: API to configure UMR 00:07:06.209 [Pipeline] } 00:07:06.222 [Pipeline] // stage 00:07:06.228 [Pipeline] stage 00:07:06.229 [Pipeline] { (Prepare) 00:07:06.243 [Pipeline] writeFile 00:07:06.257 [Pipeline] sh 00:07:06.538 + logger -p user.info -t JENKINS-CI 00:07:06.550 [Pipeline] sh 00:07:06.836 + logger -p user.info -t JENKINS-CI 00:07:06.848 [Pipeline] sh 00:07:07.131 + cat autorun-spdk.conf 00:07:07.131 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:07.131 SPDK_TEST_NVMF=1 00:07:07.131 SPDK_TEST_NVME_CLI=1 00:07:07.131 SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:07.131 SPDK_TEST_NVMF_NICS=e810 00:07:07.131 SPDK_TEST_VFIOUSER=1 00:07:07.131 SPDK_RUN_UBSAN=1 00:07:07.131 NET_TYPE=phy 00:07:07.138 RUN_NIGHTLY=0 00:07:07.142 [Pipeline] readFile 00:07:07.187 [Pipeline] withEnv 00:07:07.189 [Pipeline] { 00:07:07.202 [Pipeline] sh 00:07:07.490 + set -ex 00:07:07.490 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf ]] 00:07:07.490 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf 00:07:07.490 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:07.490 ++ SPDK_TEST_NVMF=1 00:07:07.490 ++ SPDK_TEST_NVME_CLI=1 00:07:07.490 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:07.490 ++ SPDK_TEST_NVMF_NICS=e810 00:07:07.490 ++ SPDK_TEST_VFIOUSER=1 
00:07:07.490 ++ SPDK_RUN_UBSAN=1 00:07:07.490 ++ NET_TYPE=phy 00:07:07.490 ++ RUN_NIGHTLY=0 00:07:07.490 + case $SPDK_TEST_NVMF_NICS in 00:07:07.490 + DRIVERS=ice 00:07:07.490 + [[ tcp == \r\d\m\a ]] 00:07:07.490 + [[ -n ice ]] 00:07:07.490 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:07:07.490 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:07:10.786 rmmod: ERROR: Module irdma is not currently loaded 00:07:10.786 rmmod: ERROR: Module i40iw is not currently loaded 00:07:10.786 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:07:10.786 + true 00:07:10.786 + for D in $DRIVERS 00:07:10.786 + sudo modprobe ice 00:07:10.786 + exit 0 00:07:10.797 [Pipeline] } 00:07:10.834 [Pipeline] // withEnv 00:07:10.866 [Pipeline] } 00:07:10.884 [Pipeline] // stage 00:07:10.895 [Pipeline] catchError 00:07:10.897 [Pipeline] { 00:07:10.909 [Pipeline] timeout 00:07:10.909 Timeout set to expire in 1 hr 0 min 00:07:10.911 [Pipeline] { 00:07:10.924 [Pipeline] stage 00:07:10.925 [Pipeline] { (Tests) 00:07:10.939 [Pipeline] sh 00:07:11.229 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 00:07:11.229 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 00:07:11.229 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2 00:07:11.229 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 ]] 00:07:11.229 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:07:11.229 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output 00:07:11.229 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk ]] 00:07:11.229 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output ]] 00:07:11.229 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output 00:07:11.229 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output ]] 00:07:11.229 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:07:11.229 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 00:07:11.229 + source /etc/os-release 00:07:11.229 ++ NAME='Fedora Linux' 00:07:11.229 ++ VERSION='39 (Cloud Edition)' 00:07:11.229 ++ ID=fedora 00:07:11.229 ++ VERSION_ID=39 00:07:11.229 ++ VERSION_CODENAME= 00:07:11.229 ++ PLATFORM_ID=platform:f39 00:07:11.229 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:07:11.229 ++ ANSI_COLOR='0;38;2;60;110;180' 00:07:11.229 ++ LOGO=fedora-logo-icon 00:07:11.229 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:07:11.229 ++ HOME_URL=https://fedoraproject.org/ 00:07:11.229 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:07:11.230 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:07:11.230 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:07:11.230 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:07:11.230 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:07:11.230 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:07:11.230 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:07:11.230 ++ SUPPORT_END=2024-11-12 00:07:11.230 ++ VARIANT='Cloud Edition' 00:07:11.230 ++ VARIANT_ID=cloud 00:07:11.230 + uname -a 00:07:11.230 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:07:11.230 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh status 00:07:13.771 Hugepages 00:07:13.771 node hugesize free / total 00:07:13.771 node0 1048576kB 0 / 0 00:07:13.771 node0 2048kB 0 / 0 00:07:13.771 node1 1048576kB 0 / 0 00:07:13.771 node1 2048kB 0 / 0 00:07:13.771 00:07:13.771 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:13.771 I/OAT 0000:00:04.0 8086 
2021 0 ioatdma - - 00:07:13.771 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:07:13.771 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:07:13.771 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:07:13.771 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:07:13.771 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:07:13.771 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:07:13.771 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:07:13.771 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:07:13.771 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:07:13.771 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:07:13.771 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:07:13.771 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:07:13.771 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:07:13.771 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:07:13.771 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:07:13.771 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:07:13.771 + rm -f /tmp/spdk-ld-path 00:07:13.771 + source autorun-spdk.conf 00:07:13.771 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:13.771 ++ SPDK_TEST_NVMF=1 00:07:13.771 ++ SPDK_TEST_NVME_CLI=1 00:07:13.771 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:13.771 ++ SPDK_TEST_NVMF_NICS=e810 00:07:13.771 ++ SPDK_TEST_VFIOUSER=1 00:07:13.771 ++ SPDK_RUN_UBSAN=1 00:07:13.771 ++ NET_TYPE=phy 00:07:13.771 ++ RUN_NIGHTLY=0 00:07:13.771 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:07:13.771 + [[ -n '' ]] 00:07:13.771 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:07:13.771 + for M in /var/spdk/build-*-manifest.txt 00:07:13.771 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:07:13.771 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output/ 00:07:13.771 + for M in /var/spdk/build-*-manifest.txt 00:07:13.771 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:07:13.771 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output/ 00:07:13.771 + for M in /var/spdk/build-*-manifest.txt 00:07:13.771 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:07:13.771 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output/ 00:07:13.771 ++ uname 00:07:13.771 + [[ Linux == \L\i\n\u\x ]] 00:07:13.771 + sudo dmesg -T 00:07:13.771 + sudo dmesg --clear 00:07:13.771 + dmesg_pid=93357 00:07:13.771 + [[ Fedora Linux == FreeBSD ]] 00:07:13.771 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:13.771 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:13.771 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:07:13.771 + sudo dmesg -Tw 00:07:13.771 + [[ -x /usr/src/fio-static/fio ]] 00:07:13.771 + export FIO_BIN=/usr/src/fio-static/fio 00:07:13.771 + FIO_BIN=/usr/src/fio-static/fio 00:07:13.771 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\_\2\/\q\e\m\u\_\v\f\i\o\/* ]] 00:07:13.771 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:07:13.771 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:07:13.771 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:13.771 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:13.771 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:07:13.771 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:13.771 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:13.771 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf 00:07:14.031 23:48:48 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:07:14.031 23:48:48 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf 00:07:14.031 23:48:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:14.031 23:48:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:07:14.031 23:48:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:07:14.031 23:48:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:14.031 23:48:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:07:14.031 23:48:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:07:14.031 23:48:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:07:14.031 23:48:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:07:14.031 23:48:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:07:14.031 23:48:48 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:07:14.031 23:48:48 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf 00:07:14.031 23:48:48 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:07:14.031 23:48:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:07:14.031 23:48:48 -- scripts/common.sh@15 -- $ shopt -s extglob 00:07:14.031 23:48:48 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:07:14.031 23:48:48 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.031 23:48:48 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.031 23:48:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.031 23:48:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.031 23:48:48 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.031 23:48:48 -- paths/export.sh@5 -- $ export PATH 00:07:14.031 23:48:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.031 23:48:48 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output 00:07:14.031 23:48:48 -- common/autobuild_common.sh@493 -- $ date +%s 00:07:14.031 23:48:48 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733784528.XXXXXX 00:07:14.031 23:48:48 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733784528.16NdpS 00:07:14.031 23:48:48 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:07:14.031 23:48:48 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:07:14.031 23:48:48 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/' 00:07:14.031 23:48:48 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/xnvme --exclude /tmp' 00:07:14.031 23:48:48 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/xnvme --exclude /tmp --status-bugs' 00:07:14.031 23:48:48 -- common/autobuild_common.sh@509 -- $ get_config_params 00:07:14.031 23:48:48 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:07:14.031 23:48:48 -- common/autotest_common.sh@10 -- $ set +x 00:07:14.031 23:48:48 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:07:14.031 23:48:48 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:07:14.031 23:48:48 -- pm/common@17 -- $ local monitor 00:07:14.031 23:48:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:14.031 23:48:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:14.031 23:48:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:14.031 23:48:48 -- pm/common@21 -- $ date +%s 00:07:14.031 23:48:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:14.031 23:48:48 -- pm/common@21 -- $ date +%s 00:07:14.031 23:48:48 -- pm/common@25 -- $ sleep 1 00:07:14.031 23:48:48 -- pm/common@21 -- $ date +%s 00:07:14.031 23:48:48 -- pm/common@21 -- $ date +%s 00:07:14.031 23:48:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733784528 00:07:14.031 23:48:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733784528 00:07:14.031 23:48:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733784528 00:07:14.031 23:48:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733784528 00:07:14.031 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733784528_collect-vmstat.pm.log 00:07:14.031 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733784528_collect-cpu-load.pm.log 00:07:14.031 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733784528_collect-cpu-temp.pm.log 00:07:14.031 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733784528_collect-bmc-pm.bmc.pm.log 00:07:14.969 23:48:49 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:07:14.969 23:48:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:07:14.969 23:48:49 -- spdk/autobuild.sh@12 -- $ umask 022 00:07:14.969 23:48:49 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:07:14.969 23:48:49 -- spdk/autobuild.sh@16 -- $ date -u 00:07:14.969 Mon Dec 9 10:48:49 PM UTC 2024 00:07:14.969 23:48:49 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:07:15.228 v25.01-pre-311-gb6a18b192 00:07:15.228 23:48:49 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:07:15.228 23:48:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:07:15.228 23:48:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:07:15.228 23:48:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:15.228 23:48:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:15.228 23:48:49 -- common/autotest_common.sh@10 -- $ set +x 00:07:15.228 ************************************ 00:07:15.228 START TEST ubsan 00:07:15.228 ************************************ 00:07:15.228 23:48:49 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:07:15.228 using ubsan 00:07:15.228 00:07:15.228 real 0m0.000s 00:07:15.228 user 0m0.000s 00:07:15.228 sys 0m0.000s 00:07:15.228 23:48:49 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:15.228 23:48:49 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:07:15.228 ************************************ 00:07:15.228 END TEST ubsan 00:07:15.228 ************************************ 00:07:15.228 23:48:49 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:07:15.228 23:48:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:07:15.228 23:48:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:07:15.228 23:48:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:07:15.228 23:48:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:07:15.228 23:48:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:07:15.229 23:48:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:07:15.229 23:48:49 -- spdk/autobuild.sh@62 -- $ [[ 0 
-eq 1 ]] 00:07:15.229 23:48:49 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:07:15.796 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk 00:07:15.796 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build 00:07:16.734 Using 'verbs' RDMA provider 00:07:32.600 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/.spdk-isal.log)...done. 00:07:44.818 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/.spdk-isal-crypto.log)...done. 00:07:44.818 Creating mk/config.mk...done. 00:07:44.818 Creating mk/cc.flags.mk...done. 00:07:44.818 Type 'make' to build. 00:07:44.818 23:49:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:07:44.818 23:49:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:44.818 23:49:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:44.818 23:49:18 -- common/autotest_common.sh@10 -- $ set +x 00:07:44.818 ************************************ 00:07:44.818 START TEST make 00:07:44.818 ************************************ 00:07:44.818 23:49:18 make -- common/autotest_common.sh@1129 -- $ make -j96 00:07:44.818 make[1]: Nothing to be done for 'all'. 00:07:46.730 The Meson build system 00:07:46.730 Version: 1.5.0 00:07:46.730 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/libvfio-user 00:07:46.730 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug 00:07:46.730 Build type: native build 00:07:46.730 Project name: libvfio-user 00:07:46.730 Project version: 0.0.1 00:07:46.730 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:46.730 C linker for the host machine: cc ld.bfd 2.40-14 00:07:46.730 Host machine cpu family: x86_64 00:07:46.730 Host machine cpu: x86_64 00:07:46.730 Run-time dependency threads found: YES 00:07:46.730 Library dl found: YES 00:07:46.730 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:46.730 Run-time dependency json-c found: YES 0.17 00:07:46.731 Run-time dependency cmocka found: YES 1.1.7 00:07:46.731 Program pytest-3 found: NO 00:07:46.731 Program flake8 found: NO 00:07:46.731 Program misspell-fixer found: NO 00:07:46.731 Program restructuredtext-lint found: NO 00:07:46.731 Program valgrind found: YES (/usr/bin/valgrind) 00:07:46.731 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:46.731 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:46.731 Compiler for C supports arguments -Wwrite-strings: YES 00:07:46.731 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:07:46.731 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/libvfio-user/test/test-lspci.sh) 00:07:46.731 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/libvfio-user/test/test-linkage.sh) 00:07:46.731 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:07:46.731 Build targets in project: 8 00:07:46.731 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:07:46.731 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:07:46.731 00:07:46.731 libvfio-user 0.0.1 00:07:46.731 00:07:46.731 User defined options 00:07:46.731 buildtype : debug 00:07:46.731 default_library: shared 00:07:46.731 libdir : /usr/local/lib 00:07:46.731 00:07:46.731 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:46.731 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug' 00:07:46.990 [1/37] Compiling C object samples/null.p/null.c.o 00:07:46.990 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:07:46.990 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:07:46.990 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:07:46.990 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:07:46.990 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:07:46.990 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:07:46.990 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:07:46.990 [9/37] Compiling C object test/unit_tests.p/mocks.c.o 00:07:46.990 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:07:46.990 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:07:46.990 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:07:46.990 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:07:46.990 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:07:46.990 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:07:46.990 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:07:46.990 [17/37] Compiling C object samples/server.p/server.c.o 00:07:46.990 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:07:46.990 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:07:46.990 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:07:46.990 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:07:46.990 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:07:46.990 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:07:46.990 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:07:46.990 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:07:46.990 [26/37] Compiling C object samples/client.p/client.c.o 00:07:46.990 [27/37] Linking target samples/client 00:07:46.990 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:07:46.990 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:07:47.251 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:07:47.251 [31/37] Linking target test/unit_tests 00:07:47.251 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:07:47.251 [33/37] Linking target samples/lspci 00:07:47.251 [34/37] Linking target samples/null 00:07:47.251 [35/37] Linking target samples/gpio-pci-idio-16 00:07:47.251 [36/37] Linking target samples/shadow_ioeventfd_server 00:07:47.251 [37/37] Linking target samples/server 00:07:47.251 INFO: autodetecting backend as ninja 00:07:47.251 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug 
00:07:47.511 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug 00:07:47.771 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug' 00:07:47.771 ninja: no work to do. 00:07:53.055 The Meson build system 00:07:53.055 Version: 1.5.0 00:07:53.055 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk 00:07:53.055 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build-tmp 00:07:53.055 Build type: native build 00:07:53.055 Program cat found: YES (/usr/bin/cat) 00:07:53.055 Project name: DPDK 00:07:53.055 Project version: 24.03.0 00:07:53.055 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:53.055 C linker for the host machine: cc ld.bfd 2.40-14 00:07:53.055 Host machine cpu family: x86_64 00:07:53.055 Host machine cpu: x86_64 00:07:53.055 Message: ## Building in Developer Mode ## 00:07:53.055 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:53.055 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/buildtools/check-symbols.sh) 00:07:53.055 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:53.055 Program python3 found: YES (/usr/bin/python3) 00:07:53.055 Program cat found: YES (/usr/bin/cat) 00:07:53.055 Compiler for C supports arguments -march=native: YES 00:07:53.055 Checking for size of "void *" : 8 00:07:53.055 Checking for size of "void *" : 8 (cached) 00:07:53.055 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:53.055 Library m found: YES 00:07:53.055 Library numa found: YES 00:07:53.055 Has header "numaif.h" : YES 00:07:53.055 Library fdt found: NO 00:07:53.055 Library execinfo found: NO 00:07:53.055 Has header "execinfo.h" : YES 00:07:53.055 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:53.055 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:53.055 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:53.055 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:53.055 Run-time dependency openssl found: YES 3.1.1 00:07:53.055 Run-time dependency libpcap found: YES 1.10.4 00:07:53.055 Has header "pcap.h" with dependency libpcap: YES 00:07:53.055 Compiler for C supports arguments -Wcast-qual: YES 00:07:53.055 Compiler for C supports arguments -Wdeprecated: YES 00:07:53.055 Compiler for C supports arguments -Wformat: YES 00:07:53.055 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:53.055 Compiler for C supports arguments -Wformat-security: NO 00:07:53.055 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:53.055 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:53.055 Compiler for C supports arguments -Wnested-externs: YES 00:07:53.055 Compiler for C supports arguments -Wold-style-definition: YES 00:07:53.055 Compiler for C supports arguments -Wpointer-arith: YES 00:07:53.055 Compiler for C supports arguments -Wsign-compare: YES 00:07:53.055 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:53.055 Compiler for C supports arguments -Wundef: YES 00:07:53.055 Compiler for C supports arguments -Wwrite-strings: YES 00:07:53.055 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:53.055 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:07:53.055 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:53.055 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:53.055 Program objdump found: YES (/usr/bin/objdump) 00:07:53.055 Compiler for C supports arguments -mavx512f: YES 00:07:53.056 Checking if "AVX512 checking" compiles: YES 00:07:53.056 Fetching value of define "__SSE4_2__" : 1 00:07:53.056 Fetching value of define "__AES__" : 1 00:07:53.056 Fetching value of define "__AVX__" : 1 00:07:53.056 Fetching value of define "__AVX2__" : 1 00:07:53.056 Fetching value of define "__AVX512BW__" : 1 00:07:53.056 Fetching value of define "__AVX512CD__" : 1 00:07:53.056 Fetching value of define "__AVX512DQ__" : 1 00:07:53.056 Fetching value of define "__AVX512F__" : 1 00:07:53.056 Fetching value of define "__AVX512VL__" : 1 00:07:53.056 Fetching value of define "__PCLMUL__" : 1 00:07:53.056 Fetching value of define "__RDRND__" : 1 00:07:53.056 Fetching value of define "__RDSEED__" : 1 00:07:53.056 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:53.056 Fetching value of define "__znver1__" : (undefined) 00:07:53.056 Fetching value of define "__znver2__" : (undefined) 00:07:53.056 Fetching value of define "__znver3__" : (undefined) 00:07:53.056 Fetching value of define "__znver4__" : (undefined) 00:07:53.056 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:53.056 Message: lib/log: Defining dependency "log" 00:07:53.056 Message: lib/kvargs: Defining dependency "kvargs" 00:07:53.056 Message: lib/telemetry: Defining dependency "telemetry" 00:07:53.056 Checking for function "getentropy" : NO 00:07:53.056 Message: lib/eal: Defining dependency "eal" 00:07:53.056 Message: lib/ring: Defining dependency "ring" 00:07:53.056 Message: lib/rcu: Defining dependency "rcu" 00:07:53.056 Message: lib/mempool: Defining dependency "mempool" 00:07:53.056 Message: lib/mbuf: Defining dependency "mbuf" 00:07:53.056 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:53.056 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:53.056 Fetching value of define "__AVX512BW__" : 1 (cached) 00:07:53.056 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:07:53.056 Fetching value of define "__AVX512VL__" : 1 (cached) 00:07:53.056 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:07:53.056 Compiler for C supports arguments -mpclmul: YES 00:07:53.056 Compiler for C supports arguments -maes: YES 00:07:53.056 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:53.056 Compiler for C supports arguments -mavx512bw: YES 00:07:53.056 Compiler for C supports arguments -mavx512dq: YES 00:07:53.056 Compiler for C supports arguments -mavx512vl: YES 00:07:53.056 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:53.056 Compiler for C supports arguments -mavx2: YES 00:07:53.056 Compiler for C supports arguments -mavx: YES 00:07:53.056 Message: lib/net: Defining dependency "net" 00:07:53.056 Message: lib/meter: Defining dependency "meter" 00:07:53.056 Message: lib/ethdev: Defining dependency "ethdev" 00:07:53.056 Message: lib/pci: Defining dependency "pci" 00:07:53.056 Message: lib/cmdline: Defining dependency "cmdline" 00:07:53.056 Message: lib/hash: Defining dependency "hash" 00:07:53.056 Message: lib/timer: Defining dependency "timer" 00:07:53.056 Message: lib/compressdev: Defining dependency "compressdev" 00:07:53.056 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:53.056 Message: lib/dmadev: Defining dependency 
"dmadev" 00:07:53.056 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:53.056 Message: lib/power: Defining dependency "power" 00:07:53.056 Message: lib/reorder: Defining dependency "reorder" 00:07:53.056 Message: lib/security: Defining dependency "security" 00:07:53.056 Has header "linux/userfaultfd.h" : YES 00:07:53.056 Has header "linux/vduse.h" : YES 00:07:53.056 Message: lib/vhost: Defining dependency "vhost" 00:07:53.056 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:53.056 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:53.056 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:53.056 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:53.056 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:53.056 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:53.056 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:53.056 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:53.056 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:53.056 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:53.056 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:53.056 Configuring doxy-api-html.conf using configuration 00:07:53.056 Configuring doxy-api-man.conf using configuration 00:07:53.056 Program mandb found: YES (/usr/bin/mandb) 00:07:53.056 Program sphinx-build found: NO 00:07:53.056 Configuring rte_build_config.h using configuration 00:07:53.056 Message: 00:07:53.056 ================= 00:07:53.056 Applications Enabled 00:07:53.056 ================= 00:07:53.056 00:07:53.056 apps: 00:07:53.056 00:07:53.056 00:07:53.056 Message: 00:07:53.056 ================= 00:07:53.056 Libraries Enabled 00:07:53.056 ================= 00:07:53.056 00:07:53.056 libs: 00:07:53.056 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:53.056 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:53.056 cryptodev, dmadev, power, reorder, security, vhost, 00:07:53.056 00:07:53.056 Message: 00:07:53.056 =============== 00:07:53.056 Drivers Enabled 00:07:53.056 =============== 00:07:53.056 00:07:53.056 common: 00:07:53.056 00:07:53.056 bus: 00:07:53.056 pci, vdev, 00:07:53.056 mempool: 00:07:53.056 ring, 00:07:53.056 dma: 00:07:53.056 00:07:53.056 net: 00:07:53.056 00:07:53.056 crypto: 00:07:53.056 00:07:53.056 compress: 00:07:53.056 00:07:53.056 vdpa: 00:07:53.056 00:07:53.056 00:07:53.056 Message: 00:07:53.056 ================= 00:07:53.056 Content Skipped 00:07:53.056 ================= 00:07:53.056 00:07:53.056 apps: 00:07:53.056 dumpcap: explicitly disabled via build config 00:07:53.056 graph: explicitly disabled via build config 00:07:53.056 pdump: explicitly disabled via build config 00:07:53.056 proc-info: explicitly disabled via build config 00:07:53.056 test-acl: explicitly disabled via build config 00:07:53.056 test-bbdev: explicitly disabled via build config 00:07:53.056 test-cmdline: explicitly disabled via build config 00:07:53.056 test-compress-perf: explicitly disabled via build config 00:07:53.056 test-crypto-perf: explicitly disabled via build config 00:07:53.056 test-dma-perf: explicitly disabled via build config 00:07:53.056 test-eventdev: explicitly disabled via build config 00:07:53.056 test-fib: explicitly disabled via build config 00:07:53.056 test-flow-perf: explicitly disabled via build config 00:07:53.056 test-gpudev: explicitly 
disabled via build config 00:07:53.056 test-mldev: explicitly disabled via build config 00:07:53.056 test-pipeline: explicitly disabled via build config 00:07:53.056 test-pmd: explicitly disabled via build config 00:07:53.056 test-regex: explicitly disabled via build config 00:07:53.056 test-sad: explicitly disabled via build config 00:07:53.056 test-security-perf: explicitly disabled via build config 00:07:53.056 00:07:53.056 libs: 00:07:53.056 argparse: explicitly disabled via build config 00:07:53.056 metrics: explicitly disabled via build config 00:07:53.056 acl: explicitly disabled via build config 00:07:53.056 bbdev: explicitly disabled via build config 00:07:53.056 bitratestats: explicitly disabled via build config 00:07:53.056 bpf: explicitly disabled via build config 00:07:53.056 cfgfile: explicitly disabled via build config 00:07:53.056 distributor: explicitly disabled via build config 00:07:53.056 efd: explicitly disabled via build config 00:07:53.056 eventdev: explicitly disabled via build config 00:07:53.056 dispatcher: explicitly disabled via build config 00:07:53.056 gpudev: explicitly disabled via build config 00:07:53.056 gro: explicitly disabled via build config 00:07:53.056 gso: explicitly disabled via build config 00:07:53.056 ip_frag: explicitly disabled via build config 00:07:53.056 jobstats: explicitly disabled via build config 00:07:53.056 latencystats: explicitly disabled via build config 00:07:53.056 lpm: explicitly disabled via build config 00:07:53.056 member: explicitly disabled via build config 00:07:53.056 pcapng: explicitly disabled via build config 00:07:53.056 rawdev: explicitly disabled via build config 00:07:53.056 regexdev: explicitly disabled via build config 00:07:53.056 mldev: explicitly disabled via build config 00:07:53.056 rib: explicitly disabled via build config 00:07:53.056 sched: explicitly disabled via build config 00:07:53.056 stack: explicitly disabled via build config 00:07:53.056 ipsec: explicitly disabled via build config 00:07:53.056 pdcp: explicitly disabled via build config 00:07:53.056 fib: explicitly disabled via build config 00:07:53.056 port: explicitly disabled via build config 00:07:53.056 pdump: explicitly disabled via build config 00:07:53.056 table: explicitly disabled via build config 00:07:53.056 pipeline: explicitly disabled via build config 00:07:53.056 graph: explicitly disabled via build config 00:07:53.056 node: explicitly disabled via build config 00:07:53.056 00:07:53.056 drivers: 00:07:53.056 common/cpt: not in enabled drivers build config 00:07:53.056 common/dpaax: not in enabled drivers build config 00:07:53.056 common/iavf: not in enabled drivers build config 00:07:53.056 common/idpf: not in enabled drivers build config 00:07:53.056 common/ionic: not in enabled drivers build config 00:07:53.056 common/mvep: not in enabled drivers build config 00:07:53.056 common/octeontx: not in enabled drivers build config 00:07:53.056 bus/auxiliary: not in enabled drivers build config 00:07:53.056 bus/cdx: not in enabled drivers build config 00:07:53.056 bus/dpaa: not in enabled drivers build config 00:07:53.056 bus/fslmc: not in enabled drivers build config 00:07:53.056 bus/ifpga: not in enabled drivers build config 00:07:53.056 bus/platform: not in enabled drivers build config 00:07:53.056 bus/uacce: not in enabled drivers build config 00:07:53.056 bus/vmbus: not in enabled drivers build config 00:07:53.056 common/cnxk: not in enabled drivers build config 00:07:53.056 common/mlx5: not in enabled drivers build config 
00:07:53.056 common/nfp: not in enabled drivers build config 00:07:53.056 common/nitrox: not in enabled drivers build config 00:07:53.056 common/qat: not in enabled drivers build config 00:07:53.056 common/sfc_efx: not in enabled drivers build config 00:07:53.056 mempool/bucket: not in enabled drivers build config 00:07:53.056 mempool/cnxk: not in enabled drivers build config 00:07:53.057 mempool/dpaa: not in enabled drivers build config 00:07:53.057 mempool/dpaa2: not in enabled drivers build config 00:07:53.057 mempool/octeontx: not in enabled drivers build config 00:07:53.057 mempool/stack: not in enabled drivers build config 00:07:53.057 dma/cnxk: not in enabled drivers build config 00:07:53.057 dma/dpaa: not in enabled drivers build config 00:07:53.057 dma/dpaa2: not in enabled drivers build config 00:07:53.057 dma/hisilicon: not in enabled drivers build config 00:07:53.057 dma/idxd: not in enabled drivers build config 00:07:53.057 dma/ioat: not in enabled drivers build config 00:07:53.057 dma/skeleton: not in enabled drivers build config 00:07:53.057 net/af_packet: not in enabled drivers build config 00:07:53.057 net/af_xdp: not in enabled drivers build config 00:07:53.057 net/ark: not in enabled drivers build config 00:07:53.057 net/atlantic: not in enabled drivers build config 00:07:53.057 net/avp: not in enabled drivers build config 00:07:53.057 net/axgbe: not in enabled drivers build config 00:07:53.057 net/bnx2x: not in enabled drivers build config 00:07:53.057 net/bnxt: not in enabled drivers build config 00:07:53.057 net/bonding: not in enabled drivers build config 00:07:53.057 net/cnxk: not in enabled drivers build config 00:07:53.057 net/cpfl: not in enabled drivers build config 00:07:53.057 net/cxgbe: not in enabled drivers build config 00:07:53.057 net/dpaa: not in enabled drivers build config 00:07:53.057 net/dpaa2: not in enabled drivers build config 00:07:53.057 net/e1000: not in enabled drivers build config 00:07:53.057 net/ena: not in enabled drivers build config 00:07:53.057 net/enetc: not in enabled drivers build config 00:07:53.057 net/enetfec: not in enabled drivers build config 00:07:53.057 net/enic: not in enabled drivers build config 00:07:53.057 net/failsafe: not in enabled drivers build config 00:07:53.057 net/fm10k: not in enabled drivers build config 00:07:53.057 net/gve: not in enabled drivers build config 00:07:53.057 net/hinic: not in enabled drivers build config 00:07:53.057 net/hns3: not in enabled drivers build config 00:07:53.057 net/i40e: not in enabled drivers build config 00:07:53.057 net/iavf: not in enabled drivers build config 00:07:53.057 net/ice: not in enabled drivers build config 00:07:53.057 net/idpf: not in enabled drivers build config 00:07:53.057 net/igc: not in enabled drivers build config 00:07:53.057 net/ionic: not in enabled drivers build config 00:07:53.057 net/ipn3ke: not in enabled drivers build config 00:07:53.057 net/ixgbe: not in enabled drivers build config 00:07:53.057 net/mana: not in enabled drivers build config 00:07:53.057 net/memif: not in enabled drivers build config 00:07:53.057 net/mlx4: not in enabled drivers build config 00:07:53.057 net/mlx5: not in enabled drivers build config 00:07:53.057 net/mvneta: not in enabled drivers build config 00:07:53.057 net/mvpp2: not in enabled drivers build config 00:07:53.057 net/netvsc: not in enabled drivers build config 00:07:53.057 net/nfb: not in enabled drivers build config 00:07:53.057 net/nfp: not in enabled drivers build config 00:07:53.057 net/ngbe: not in enabled 
drivers build config 00:07:53.057 net/null: not in enabled drivers build config 00:07:53.057 net/octeontx: not in enabled drivers build config 00:07:53.057 net/octeon_ep: not in enabled drivers build config 00:07:53.057 net/pcap: not in enabled drivers build config 00:07:53.057 net/pfe: not in enabled drivers build config 00:07:53.057 net/qede: not in enabled drivers build config 00:07:53.057 net/ring: not in enabled drivers build config 00:07:53.057 net/sfc: not in enabled drivers build config 00:07:53.057 net/softnic: not in enabled drivers build config 00:07:53.057 net/tap: not in enabled drivers build config 00:07:53.057 net/thunderx: not in enabled drivers build config 00:07:53.057 net/txgbe: not in enabled drivers build config 00:07:53.057 net/vdev_netvsc: not in enabled drivers build config 00:07:53.057 net/vhost: not in enabled drivers build config 00:07:53.057 net/virtio: not in enabled drivers build config 00:07:53.057 net/vmxnet3: not in enabled drivers build config 00:07:53.057 raw/*: missing internal dependency, "rawdev" 00:07:53.057 crypto/armv8: not in enabled drivers build config 00:07:53.057 crypto/bcmfs: not in enabled drivers build config 00:07:53.057 crypto/caam_jr: not in enabled drivers build config 00:07:53.057 crypto/ccp: not in enabled drivers build config 00:07:53.057 crypto/cnxk: not in enabled drivers build config 00:07:53.057 crypto/dpaa_sec: not in enabled drivers build config 00:07:53.057 crypto/dpaa2_sec: not in enabled drivers build config 00:07:53.057 crypto/ipsec_mb: not in enabled drivers build config 00:07:53.057 crypto/mlx5: not in enabled drivers build config 00:07:53.057 crypto/mvsam: not in enabled drivers build config 00:07:53.057 crypto/nitrox: not in enabled drivers build config 00:07:53.057 crypto/null: not in enabled drivers build config 00:07:53.057 crypto/octeontx: not in enabled drivers build config 00:07:53.057 crypto/openssl: not in enabled drivers build config 00:07:53.057 crypto/scheduler: not in enabled drivers build config 00:07:53.057 crypto/uadk: not in enabled drivers build config 00:07:53.057 crypto/virtio: not in enabled drivers build config 00:07:53.057 compress/isal: not in enabled drivers build config 00:07:53.057 compress/mlx5: not in enabled drivers build config 00:07:53.057 compress/nitrox: not in enabled drivers build config 00:07:53.057 compress/octeontx: not in enabled drivers build config 00:07:53.057 compress/zlib: not in enabled drivers build config 00:07:53.057 regex/*: missing internal dependency, "regexdev" 00:07:53.057 ml/*: missing internal dependency, "mldev" 00:07:53.057 vdpa/ifc: not in enabled drivers build config 00:07:53.057 vdpa/mlx5: not in enabled drivers build config 00:07:53.057 vdpa/nfp: not in enabled drivers build config 00:07:53.057 vdpa/sfc: not in enabled drivers build config 00:07:53.057 event/*: missing internal dependency, "eventdev" 00:07:53.057 baseband/*: missing internal dependency, "bbdev" 00:07:53.057 gpu/*: missing internal dependency, "gpudev" 00:07:53.057 00:07:53.057 00:07:53.057 Build targets in project: 85 00:07:53.057 00:07:53.057 DPDK 24.03.0 00:07:53.057 00:07:53.057 User defined options 00:07:53.057 buildtype : debug 00:07:53.057 default_library : shared 00:07:53.057 libdir : lib 00:07:53.057 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build 00:07:53.057 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:53.057 c_link_args : 00:07:53.057 cpu_instruction_set: native 00:07:53.057 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:07:53.057 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:07:53.057 enable_docs : false 00:07:53.057 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:53.057 enable_kmods : false 00:07:53.057 max_lcores : 128 00:07:53.057 tests : false 00:07:53.057 00:07:53.057 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:53.057 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build-tmp' 00:07:53.057 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:53.057 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:53.057 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:53.057 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:53.057 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:53.057 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:53.057 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:53.057 [8/268] Linking static target lib/librte_kvargs.a 00:07:53.057 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:53.057 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:53.057 [11/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:53.057 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:53.057 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:53.057 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:53.057 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:53.318 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:53.318 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:53.318 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:53.318 [19/268] Linking static target lib/librte_log.a 00:07:53.318 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:53.318 [21/268] Linking static target lib/librte_pci.a 00:07:53.318 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:53.318 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:53.318 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:53.580 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:53.580 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:53.580 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:53.580 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:53.580 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:53.580 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:53.580 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:53.580 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:53.580 [33/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:53.580 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:53.580 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:53.580 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:53.580 [37/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:53.580 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:53.580 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:53.580 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:53.580 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:53.580 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:53.580 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:53.580 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:53.580 [45/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:53.580 [46/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:53.580 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:53.580 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:53.580 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:53.580 [50/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:53.580 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:53.580 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:53.580 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:53.580 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:53.580 [55/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:53.580 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:53.580 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:53.580 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:53.580 [59/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:53.581 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:53.581 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:53.581 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:53.581 [63/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:53.581 [64/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:53.581 [65/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:53.581 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:53.581 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:53.581 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:53.581 [69/268] Linking static target lib/librte_ring.a 00:07:53.581 [70/268] Compiling C object 
lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:53.581 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:53.581 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:53.581 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:53.581 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:53.581 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:53.581 [76/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:53.581 [77/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:53.581 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:53.581 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:53.581 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:53.581 [81/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:53.581 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:53.581 [83/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:53.581 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:53.581 [85/268] Linking static target lib/librte_meter.a 00:07:53.581 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:53.581 [87/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:53.840 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:53.840 [89/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:53.840 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:53.840 [91/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:53.840 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:53.840 [93/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:53.840 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:53.840 [95/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:53.840 [96/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:53.840 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:53.840 [98/268] Linking static target lib/librte_telemetry.a 00:07:53.840 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:53.840 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:53.840 [101/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:53.840 [102/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:53.840 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:53.840 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:53.840 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:53.840 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:53.840 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:53.840 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:53.840 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:53.840 [110/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:53.840 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:53.840 [112/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:53.840 [113/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:53.840 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:53.840 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:53.840 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:53.840 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:53.840 [118/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:53.840 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:53.840 [120/268] Linking static target lib/librte_mempool.a 00:07:53.840 [121/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:53.840 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:53.840 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:53.840 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:53.840 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:53.840 [126/268] Linking static target lib/librte_rcu.a 00:07:53.840 [127/268] Linking static target lib/librte_eal.a 00:07:53.840 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:53.840 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:53.840 [130/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:53.840 [131/268] Linking static target lib/librte_cmdline.a 00:07:53.840 [132/268] Linking static target lib/librte_net.a 00:07:53.840 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:53.840 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:53.840 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:53.840 [136/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:53.840 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:53.840 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.100 [139/268] Linking target lib/librte_log.so.24.1 00:07:54.100 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:54.100 [141/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:54.100 [142/268] Linking static target lib/librte_mbuf.a 00:07:54.100 [143/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:54.100 [144/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:54.100 [145/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:54.100 [146/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:54.100 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:54.100 [148/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:54.100 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:54.100 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:54.100 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 
00:07:54.100 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:54.100 [153/268] Linking static target lib/librte_timer.a 00:07:54.100 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:54.100 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:54.100 [156/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:54.100 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:54.100 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:54.100 [159/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.100 [160/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:54.100 [161/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.100 [162/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:54.100 [163/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:54.100 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:54.100 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:54.100 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:54.100 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:54.100 [168/268] Linking static target lib/librte_reorder.a 00:07:54.100 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:54.100 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:54.100 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:54.100 [172/268] Linking static target lib/librte_dmadev.a 00:07:54.100 [173/268] Linking target lib/librte_kvargs.so.24.1 00:07:54.100 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:54.100 [175/268] Linking target lib/librte_telemetry.so.24.1 00:07:54.100 [176/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.100 [177/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:54.100 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:54.100 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:54.100 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:54.100 [181/268] Linking static target lib/librte_compressdev.a 00:07:54.100 [182/268] Linking static target lib/librte_power.a 00:07:54.100 [183/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:54.360 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:54.360 [185/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:54.360 [186/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:54.360 [187/268] Linking static target lib/librte_security.a 00:07:54.360 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:54.360 [189/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:54.360 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:54.360 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:54.360 [192/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:54.360 [193/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:54.360 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:54.360 [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:54.360 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:54.360 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:54.360 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:54.360 [199/268] Linking static target drivers/librte_bus_vdev.a 00:07:54.360 [200/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:54.360 [201/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:54.360 [202/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:54.360 [203/268] Linking static target lib/librte_hash.a 00:07:54.360 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:54.360 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:54.360 [206/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:54.619 [207/268] Linking static target drivers/librte_bus_pci.a 00:07:54.620 [208/268] Linking static target lib/librte_cryptodev.a 00:07:54.620 [209/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.620 [210/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.620 [211/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:54.620 [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.620 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:54.620 [214/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:54.620 [215/268] Linking static target drivers/librte_mempool_ring.a 00:07:54.620 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.879 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:54.879 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.879 [219/268] Linking static target lib/librte_ethdev.a 00:07:54.879 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.879 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.879 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.879 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:54.879 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:55.138 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:55.138 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:55.397 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:55.965 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:55.965 [229/268] Linking static target lib/librte_vhost.a 00:07:56.533 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:57.913 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:03.187 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:03.756 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:03.756 [234/268] Linking target lib/librte_eal.so.24.1 00:08:04.016 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:08:04.016 [236/268] Linking target lib/librte_timer.so.24.1 00:08:04.016 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:08:04.016 [238/268] Linking target lib/librte_dmadev.so.24.1 00:08:04.016 [239/268] Linking target lib/librte_ring.so.24.1 00:08:04.016 [240/268] Linking target lib/librte_meter.so.24.1 00:08:04.016 [241/268] Linking target lib/librte_pci.so.24.1 00:08:04.016 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:08:04.016 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:08:04.016 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:08:04.016 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:08:04.016 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:08:04.275 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:08:04.275 [248/268] Linking target lib/librte_rcu.so.24.1 00:08:04.275 [249/268] Linking target lib/librte_mempool.so.24.1 00:08:04.275 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:08:04.275 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:08:04.275 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:08:04.275 [253/268] Linking target lib/librte_mbuf.so.24.1 00:08:04.533 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:08:04.533 [255/268] Linking target lib/librte_net.so.24.1 00:08:04.533 [256/268] Linking target lib/librte_reorder.so.24.1 00:08:04.533 [257/268] Linking target lib/librte_compressdev.so.24.1 00:08:04.533 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:08:04.533 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:08:04.534 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:08:04.792 [261/268] Linking target lib/librte_hash.so.24.1 00:08:04.792 [262/268] Linking target lib/librte_security.so.24.1 00:08:04.792 [263/268] Linking target lib/librte_cmdline.so.24.1 00:08:04.792 [264/268] Linking target lib/librte_ethdev.so.24.1 00:08:04.792 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:08:04.792 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:08:04.792 [267/268] Linking target lib/librte_power.so.24.1 00:08:04.792 [268/268] Linking target lib/librte_vhost.so.24.1 00:08:04.792 INFO: autodetecting backend as ninja 00:08:04.792 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build-tmp -j 96 00:08:17.011 CC lib/ut/ut.o 00:08:17.011 CC 
lib/ut_mock/mock.o 00:08:17.011 CC lib/log/log.o 00:08:17.011 CC lib/log/log_flags.o 00:08:17.011 CC lib/log/log_deprecated.o 00:08:17.011 LIB libspdk_ut.a 00:08:17.011 LIB libspdk_ut_mock.a 00:08:17.011 LIB libspdk_log.a 00:08:17.011 SO libspdk_ut.so.2.0 00:08:17.011 SO libspdk_ut_mock.so.6.0 00:08:17.011 SO libspdk_log.so.7.1 00:08:17.011 SYMLINK libspdk_ut.so 00:08:17.011 SYMLINK libspdk_ut_mock.so 00:08:17.011 SYMLINK libspdk_log.so 00:08:17.011 CC lib/dma/dma.o 00:08:17.011 CXX lib/trace_parser/trace.o 00:08:17.011 CC lib/util/base64.o 00:08:17.011 CC lib/ioat/ioat.o 00:08:17.011 CC lib/util/bit_array.o 00:08:17.011 CC lib/util/cpuset.o 00:08:17.011 CC lib/util/crc16.o 00:08:17.011 CC lib/util/crc32.o 00:08:17.011 CC lib/util/crc32c.o 00:08:17.011 CC lib/util/crc32_ieee.o 00:08:17.011 CC lib/util/crc64.o 00:08:17.011 CC lib/util/fd.o 00:08:17.011 CC lib/util/dif.o 00:08:17.011 CC lib/util/fd_group.o 00:08:17.011 CC lib/util/file.o 00:08:17.011 CC lib/util/hexlify.o 00:08:17.011 CC lib/util/iov.o 00:08:17.011 CC lib/util/math.o 00:08:17.011 CC lib/util/net.o 00:08:17.011 CC lib/util/pipe.o 00:08:17.011 CC lib/util/string.o 00:08:17.011 CC lib/util/strerror_tls.o 00:08:17.011 CC lib/util/uuid.o 00:08:17.011 CC lib/util/xor.o 00:08:17.011 CC lib/util/zipf.o 00:08:17.011 CC lib/util/md5.o 00:08:17.011 CC lib/vfio_user/host/vfio_user_pci.o 00:08:17.011 CC lib/vfio_user/host/vfio_user.o 00:08:17.011 LIB libspdk_dma.a 00:08:17.011 SO libspdk_dma.so.5.0 00:08:17.011 SYMLINK libspdk_dma.so 00:08:17.011 LIB libspdk_ioat.a 00:08:17.011 SO libspdk_ioat.so.7.0 00:08:17.011 LIB libspdk_vfio_user.a 00:08:17.011 SYMLINK libspdk_ioat.so 00:08:17.011 SO libspdk_vfio_user.so.5.0 00:08:17.011 SYMLINK libspdk_vfio_user.so 00:08:17.011 LIB libspdk_util.a 00:08:17.011 SO libspdk_util.so.10.1 00:08:17.011 SYMLINK libspdk_util.so 00:08:17.011 CC lib/vmd/vmd.o 00:08:17.011 CC lib/vmd/led.o 00:08:17.011 CC lib/conf/conf.o 00:08:17.011 CC lib/rdma_utils/rdma_utils.o 00:08:17.011 CC lib/idxd/idxd.o 00:08:17.011 CC lib/idxd/idxd_user.o 00:08:17.011 CC lib/idxd/idxd_kernel.o 00:08:17.011 CC lib/env_dpdk/env.o 00:08:17.011 CC lib/json/json_parse.o 00:08:17.011 CC lib/env_dpdk/memory.o 00:08:17.011 CC lib/json/json_util.o 00:08:17.011 CC lib/env_dpdk/pci.o 00:08:17.011 CC lib/json/json_write.o 00:08:17.011 CC lib/env_dpdk/init.o 00:08:17.011 CC lib/env_dpdk/threads.o 00:08:17.011 CC lib/env_dpdk/pci_ioat.o 00:08:17.011 CC lib/env_dpdk/pci_virtio.o 00:08:17.011 CC lib/env_dpdk/pci_vmd.o 00:08:17.011 CC lib/env_dpdk/pci_idxd.o 00:08:17.011 CC lib/env_dpdk/pci_event.o 00:08:17.011 CC lib/env_dpdk/sigbus_handler.o 00:08:17.011 CC lib/env_dpdk/pci_dpdk.o 00:08:17.011 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:17.011 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:17.270 LIB libspdk_conf.a 00:08:17.270 LIB libspdk_rdma_utils.a 00:08:17.270 SO libspdk_conf.so.6.0 00:08:17.270 SO libspdk_rdma_utils.so.1.0 00:08:17.270 LIB libspdk_json.a 00:08:17.529 SYMLINK libspdk_conf.so 00:08:17.529 SO libspdk_json.so.6.0 00:08:17.529 SYMLINK libspdk_rdma_utils.so 00:08:17.529 SYMLINK libspdk_json.so 00:08:17.529 LIB libspdk_idxd.a 00:08:17.529 LIB libspdk_vmd.a 00:08:17.529 SO libspdk_idxd.so.12.1 00:08:17.788 SO libspdk_vmd.so.6.0 00:08:17.788 SYMLINK libspdk_idxd.so 00:08:17.788 SYMLINK libspdk_vmd.so 00:08:17.788 CC lib/rdma_provider/common.o 00:08:17.788 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:17.788 CC lib/jsonrpc/jsonrpc_server.o 00:08:17.788 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:17.788 CC lib/jsonrpc/jsonrpc_client.o 00:08:17.788 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:08:17.788 LIB libspdk_trace_parser.a 00:08:17.788 SO libspdk_trace_parser.so.6.0 00:08:18.047 SYMLINK libspdk_trace_parser.so 00:08:18.047 LIB libspdk_rdma_provider.a 00:08:18.047 SO libspdk_rdma_provider.so.7.0 00:08:18.047 LIB libspdk_jsonrpc.a 00:08:18.047 SYMLINK libspdk_rdma_provider.so 00:08:18.047 SO libspdk_jsonrpc.so.6.0 00:08:18.047 SYMLINK libspdk_jsonrpc.so 00:08:18.047 LIB libspdk_env_dpdk.a 00:08:18.306 SO libspdk_env_dpdk.so.15.1 00:08:18.306 SYMLINK libspdk_env_dpdk.so 00:08:18.306 CC lib/rpc/rpc.o 00:08:18.566 LIB libspdk_rpc.a 00:08:18.566 SO libspdk_rpc.so.6.0 00:08:18.566 SYMLINK libspdk_rpc.so 00:08:19.136 CC lib/trace/trace.o 00:08:19.136 CC lib/trace/trace_flags.o 00:08:19.136 CC lib/trace/trace_rpc.o 00:08:19.136 CC lib/notify/notify.o 00:08:19.136 CC lib/notify/notify_rpc.o 00:08:19.136 CC lib/keyring/keyring.o 00:08:19.136 CC lib/keyring/keyring_rpc.o 00:08:19.136 LIB libspdk_notify.a 00:08:19.136 SO libspdk_notify.so.6.0 00:08:19.136 LIB libspdk_trace.a 00:08:19.136 LIB libspdk_keyring.a 00:08:19.136 SO libspdk_trace.so.11.0 00:08:19.136 SYMLINK libspdk_notify.so 00:08:19.136 SO libspdk_keyring.so.2.0 00:08:19.396 SYMLINK libspdk_trace.so 00:08:19.396 SYMLINK libspdk_keyring.so 00:08:19.655 CC lib/sock/sock.o 00:08:19.655 CC lib/sock/sock_rpc.o 00:08:19.655 CC lib/thread/thread.o 00:08:19.655 CC lib/thread/iobuf.o 00:08:19.915 LIB libspdk_sock.a 00:08:19.915 SO libspdk_sock.so.10.0 00:08:20.174 SYMLINK libspdk_sock.so 00:08:20.435 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:20.435 CC lib/nvme/nvme_ctrlr.o 00:08:20.435 CC lib/nvme/nvme_fabric.o 00:08:20.435 CC lib/nvme/nvme_ns_cmd.o 00:08:20.435 CC lib/nvme/nvme_ns.o 00:08:20.435 CC lib/nvme/nvme_pcie_common.o 00:08:20.435 CC lib/nvme/nvme_pcie.o 00:08:20.435 CC lib/nvme/nvme_qpair.o 00:08:20.435 CC lib/nvme/nvme.o 00:08:20.435 CC lib/nvme/nvme_quirks.o 00:08:20.435 CC lib/nvme/nvme_transport.o 00:08:20.435 CC lib/nvme/nvme_discovery.o 00:08:20.435 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:20.435 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:20.435 CC lib/nvme/nvme_tcp.o 00:08:20.435 CC lib/nvme/nvme_opal.o 00:08:20.435 CC lib/nvme/nvme_io_msg.o 00:08:20.435 CC lib/nvme/nvme_poll_group.o 00:08:20.435 CC lib/nvme/nvme_zns.o 00:08:20.435 CC lib/nvme/nvme_stubs.o 00:08:20.435 CC lib/nvme/nvme_auth.o 00:08:20.435 CC lib/nvme/nvme_cuse.o 00:08:20.435 CC lib/nvme/nvme_vfio_user.o 00:08:20.435 CC lib/nvme/nvme_rdma.o 00:08:20.694 LIB libspdk_thread.a 00:08:20.694 SO libspdk_thread.so.11.0 00:08:20.694 SYMLINK libspdk_thread.so 00:08:20.953 CC lib/fsdev/fsdev_rpc.o 00:08:20.953 CC lib/fsdev/fsdev.o 00:08:20.953 CC lib/fsdev/fsdev_io.o 00:08:20.953 CC lib/accel/accel.o 00:08:20.953 CC lib/accel/accel_rpc.o 00:08:20.953 CC lib/accel/accel_sw.o 00:08:21.213 CC lib/init/json_config.o 00:08:21.213 CC lib/init/subsystem.o 00:08:21.213 CC lib/init/subsystem_rpc.o 00:08:21.213 CC lib/vfu_tgt/tgt_endpoint.o 00:08:21.213 CC lib/vfu_tgt/tgt_rpc.o 00:08:21.213 CC lib/init/rpc.o 00:08:21.213 CC lib/blob/blobstore.o 00:08:21.213 CC lib/blob/request.o 00:08:21.213 CC lib/blob/blob_bs_dev.o 00:08:21.213 CC lib/blob/zeroes.o 00:08:21.213 CC lib/virtio/virtio.o 00:08:21.213 CC lib/virtio/virtio_vhost_user.o 00:08:21.213 CC lib/virtio/virtio_vfio_user.o 00:08:21.213 CC lib/virtio/virtio_pci.o 00:08:21.213 LIB libspdk_init.a 00:08:21.213 SO libspdk_init.so.6.0 00:08:21.473 LIB libspdk_vfu_tgt.a 00:08:21.473 LIB libspdk_virtio.a 00:08:21.473 SO libspdk_vfu_tgt.so.3.0 00:08:21.473 SYMLINK libspdk_init.so 00:08:21.473 SO 
libspdk_virtio.so.7.0 00:08:21.473 SYMLINK libspdk_vfu_tgt.so 00:08:21.473 SYMLINK libspdk_virtio.so 00:08:21.473 LIB libspdk_fsdev.a 00:08:21.734 SO libspdk_fsdev.so.2.0 00:08:21.734 SYMLINK libspdk_fsdev.so 00:08:21.734 CC lib/event/app.o 00:08:21.734 CC lib/event/reactor.o 00:08:21.734 CC lib/event/log_rpc.o 00:08:21.734 CC lib/event/app_rpc.o 00:08:21.734 CC lib/event/scheduler_static.o 00:08:21.992 LIB libspdk_accel.a 00:08:21.992 SO libspdk_accel.so.16.0 00:08:21.992 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:21.992 LIB libspdk_nvme.a 00:08:21.992 SYMLINK libspdk_accel.so 00:08:21.992 LIB libspdk_event.a 00:08:21.992 SO libspdk_event.so.14.0 00:08:21.992 SO libspdk_nvme.so.15.0 00:08:22.251 SYMLINK libspdk_event.so 00:08:22.251 SYMLINK libspdk_nvme.so 00:08:22.251 CC lib/bdev/bdev.o 00:08:22.251 CC lib/bdev/bdev_rpc.o 00:08:22.251 CC lib/bdev/bdev_zone.o 00:08:22.251 CC lib/bdev/part.o 00:08:22.251 CC lib/bdev/scsi_nvme.o 00:08:22.511 LIB libspdk_fuse_dispatcher.a 00:08:22.511 SO libspdk_fuse_dispatcher.so.1.0 00:08:22.511 SYMLINK libspdk_fuse_dispatcher.so 00:08:23.447 LIB libspdk_blob.a 00:08:23.447 SO libspdk_blob.so.12.0 00:08:23.447 SYMLINK libspdk_blob.so 00:08:23.706 CC lib/blobfs/blobfs.o 00:08:23.706 CC lib/blobfs/tree.o 00:08:23.706 CC lib/lvol/lvol.o 00:08:24.275 LIB libspdk_bdev.a 00:08:24.275 SO libspdk_bdev.so.17.0 00:08:24.275 LIB libspdk_blobfs.a 00:08:24.275 SO libspdk_blobfs.so.11.0 00:08:24.275 SYMLINK libspdk_bdev.so 00:08:24.275 LIB libspdk_lvol.a 00:08:24.275 SYMLINK libspdk_blobfs.so 00:08:24.534 SO libspdk_lvol.so.11.0 00:08:24.534 SYMLINK libspdk_lvol.so 00:08:24.794 CC lib/nvmf/ctrlr.o 00:08:24.794 CC lib/ublk/ublk.o 00:08:24.794 CC lib/nvmf/ctrlr_discovery.o 00:08:24.794 CC lib/ublk/ublk_rpc.o 00:08:24.794 CC lib/nvmf/subsystem.o 00:08:24.794 CC lib/nvmf/ctrlr_bdev.o 00:08:24.794 CC lib/nvmf/nvmf.o 00:08:24.794 CC lib/nvmf/nvmf_rpc.o 00:08:24.794 CC lib/scsi/dev.o 00:08:24.794 CC lib/scsi/lun.o 00:08:24.794 CC lib/nvmf/transport.o 00:08:24.794 CC lib/scsi/port.o 00:08:24.794 CC lib/nvmf/tcp.o 00:08:24.794 CC lib/nbd/nbd.o 00:08:24.794 CC lib/nvmf/stubs.o 00:08:24.794 CC lib/scsi/scsi.o 00:08:24.794 CC lib/nvmf/mdns_server.o 00:08:24.794 CC lib/nbd/nbd_rpc.o 00:08:24.794 CC lib/nvmf/vfio_user.o 00:08:24.794 CC lib/scsi/scsi_bdev.o 00:08:24.794 CC lib/scsi/scsi_pr.o 00:08:24.794 CC lib/nvmf/rdma.o 00:08:24.794 CC lib/scsi/scsi_rpc.o 00:08:24.794 CC lib/ftl/ftl_core.o 00:08:24.794 CC lib/nvmf/auth.o 00:08:24.794 CC lib/ftl/ftl_init.o 00:08:24.794 CC lib/scsi/task.o 00:08:24.794 CC lib/ftl/ftl_layout.o 00:08:24.794 CC lib/ftl/ftl_debug.o 00:08:24.794 CC lib/ftl/ftl_io.o 00:08:24.794 CC lib/ftl/ftl_sb.o 00:08:24.794 CC lib/ftl/ftl_l2p.o 00:08:24.794 CC lib/ftl/ftl_l2p_flat.o 00:08:24.794 CC lib/ftl/ftl_nv_cache.o 00:08:24.794 CC lib/ftl/ftl_band.o 00:08:24.794 CC lib/ftl/ftl_band_ops.o 00:08:24.794 CC lib/ftl/ftl_rq.o 00:08:24.794 CC lib/ftl/ftl_writer.o 00:08:24.794 CC lib/ftl/ftl_reloc.o 00:08:24.794 CC lib/ftl/ftl_l2p_cache.o 00:08:24.794 CC lib/ftl/ftl_p2l.o 00:08:24.794 CC lib/ftl/ftl_p2l_log.o 00:08:24.794 CC lib/ftl/mngt/ftl_mngt.o 00:08:24.794 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:24.794 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:24.794 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:24.794 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:24.794 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:24.794 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:24.794 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:24.794 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:24.794 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:24.794 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:08:24.794 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:24.794 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:24.794 CC lib/ftl/utils/ftl_conf.o 00:08:24.794 CC lib/ftl/utils/ftl_mempool.o 00:08:24.794 CC lib/ftl/utils/ftl_md.o 00:08:24.794 CC lib/ftl/utils/ftl_bitmap.o 00:08:24.794 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:24.794 CC lib/ftl/utils/ftl_property.o 00:08:24.794 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:24.794 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:24.794 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:24.794 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:24.794 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:24.794 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:24.794 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:24.794 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:24.794 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:24.794 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:24.794 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:24.794 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:24.794 CC lib/ftl/base/ftl_base_dev.o 00:08:24.794 CC lib/ftl/base/ftl_base_bdev.o 00:08:24.794 CC lib/ftl/ftl_trace.o 00:08:25.361 LIB libspdk_nbd.a 00:08:25.361 SO libspdk_nbd.so.7.0 00:08:25.361 SYMLINK libspdk_nbd.so 00:08:25.361 LIB libspdk_scsi.a 00:08:25.361 LIB libspdk_ublk.a 00:08:25.361 SO libspdk_scsi.so.9.0 00:08:25.361 SO libspdk_ublk.so.3.0 00:08:25.620 SYMLINK libspdk_ublk.so 00:08:25.620 SYMLINK libspdk_scsi.so 00:08:25.620 LIB libspdk_ftl.a 00:08:25.620 SO libspdk_ftl.so.9.0 00:08:25.878 CC lib/iscsi/conn.o 00:08:25.878 CC lib/iscsi/init_grp.o 00:08:25.878 CC lib/iscsi/iscsi.o 00:08:25.878 CC lib/iscsi/param.o 00:08:25.878 CC lib/iscsi/portal_grp.o 00:08:25.878 CC lib/iscsi/tgt_node.o 00:08:25.878 CC lib/iscsi/iscsi_subsystem.o 00:08:25.878 CC lib/iscsi/iscsi_rpc.o 00:08:25.878 CC lib/iscsi/task.o 00:08:25.878 CC lib/vhost/vhost.o 00:08:25.878 CC lib/vhost/vhost_rpc.o 00:08:25.878 CC lib/vhost/vhost_scsi.o 00:08:25.878 CC lib/vhost/vhost_blk.o 00:08:25.878 CC lib/vhost/rte_vhost_user.o 00:08:25.878 SYMLINK libspdk_ftl.so 00:08:26.446 LIB libspdk_nvmf.a 00:08:26.705 SO libspdk_nvmf.so.20.0 00:08:26.705 LIB libspdk_vhost.a 00:08:26.705 SO libspdk_vhost.so.8.0 00:08:26.705 SYMLINK libspdk_nvmf.so 00:08:26.705 SYMLINK libspdk_vhost.so 00:08:26.705 LIB libspdk_iscsi.a 00:08:26.965 SO libspdk_iscsi.so.8.0 00:08:26.965 SYMLINK libspdk_iscsi.so 00:08:27.534 CC module/env_dpdk/env_dpdk_rpc.o 00:08:27.534 CC module/vfu_device/vfu_virtio.o 00:08:27.534 CC module/vfu_device/vfu_virtio_blk.o 00:08:27.534 CC module/vfu_device/vfu_virtio_scsi.o 00:08:27.534 CC module/vfu_device/vfu_virtio_rpc.o 00:08:27.534 CC module/vfu_device/vfu_virtio_fs.o 00:08:27.534 CC module/accel/ioat/accel_ioat_rpc.o 00:08:27.534 CC module/accel/ioat/accel_ioat.o 00:08:27.534 CC module/accel/error/accel_error.o 00:08:27.534 CC module/accel/iaa/accel_iaa.o 00:08:27.534 CC module/accel/error/accel_error_rpc.o 00:08:27.534 CC module/accel/iaa/accel_iaa_rpc.o 00:08:27.534 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:27.534 CC module/sock/posix/posix.o 00:08:27.534 CC module/fsdev/aio/fsdev_aio.o 00:08:27.534 CC module/fsdev/aio/linux_aio_mgr.o 00:08:27.534 CC module/blob/bdev/blob_bdev.o 00:08:27.534 CC module/accel/dsa/accel_dsa_rpc.o 00:08:27.534 CC module/accel/dsa/accel_dsa.o 00:08:27.534 LIB libspdk_env_dpdk_rpc.a 00:08:27.534 CC module/keyring/linux/keyring.o 00:08:27.534 CC module/keyring/file/keyring.o 00:08:27.534 CC module/keyring/file/keyring_rpc.o 00:08:27.534 CC module/keyring/linux/keyring_rpc.o 00:08:27.534 CC 
module/scheduler/gscheduler/gscheduler.o 00:08:27.534 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:27.534 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:27.534 SO libspdk_env_dpdk_rpc.so.6.0 00:08:27.793 SYMLINK libspdk_env_dpdk_rpc.so 00:08:27.793 LIB libspdk_keyring_linux.a 00:08:27.793 LIB libspdk_keyring_file.a 00:08:27.793 LIB libspdk_accel_ioat.a 00:08:27.793 SO libspdk_keyring_linux.so.1.0 00:08:27.793 LIB libspdk_scheduler_gscheduler.a 00:08:27.793 SO libspdk_keyring_file.so.2.0 00:08:27.793 LIB libspdk_accel_error.a 00:08:27.793 LIB libspdk_scheduler_dpdk_governor.a 00:08:27.793 LIB libspdk_accel_iaa.a 00:08:27.793 SO libspdk_accel_ioat.so.6.0 00:08:27.793 LIB libspdk_scheduler_dynamic.a 00:08:27.793 SO libspdk_accel_error.so.2.0 00:08:27.793 SO libspdk_scheduler_gscheduler.so.4.0 00:08:27.793 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:27.793 SO libspdk_accel_iaa.so.3.0 00:08:27.793 SYMLINK libspdk_keyring_linux.so 00:08:27.793 LIB libspdk_blob_bdev.a 00:08:27.793 SYMLINK libspdk_keyring_file.so 00:08:27.793 SO libspdk_scheduler_dynamic.so.4.0 00:08:27.793 SYMLINK libspdk_accel_ioat.so 00:08:27.793 LIB libspdk_accel_dsa.a 00:08:27.793 SYMLINK libspdk_accel_error.so 00:08:27.793 SYMLINK libspdk_scheduler_gscheduler.so 00:08:27.793 SYMLINK libspdk_accel_iaa.so 00:08:27.793 SO libspdk_blob_bdev.so.12.0 00:08:27.793 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:27.793 SYMLINK libspdk_scheduler_dynamic.so 00:08:27.793 SO libspdk_accel_dsa.so.5.0 00:08:28.053 SYMLINK libspdk_blob_bdev.so 00:08:28.053 LIB libspdk_vfu_device.a 00:08:28.053 SYMLINK libspdk_accel_dsa.so 00:08:28.053 SO libspdk_vfu_device.so.3.0 00:08:28.053 SYMLINK libspdk_vfu_device.so 00:08:28.053 LIB libspdk_fsdev_aio.a 00:08:28.312 SO libspdk_fsdev_aio.so.1.0 00:08:28.312 LIB libspdk_sock_posix.a 00:08:28.312 SO libspdk_sock_posix.so.6.0 00:08:28.312 SYMLINK libspdk_fsdev_aio.so 00:08:28.312 SYMLINK libspdk_sock_posix.so 00:08:28.312 CC module/bdev/nvme/bdev_nvme.o 00:08:28.312 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:28.312 CC module/bdev/nvme/nvme_rpc.o 00:08:28.312 CC module/bdev/nvme/bdev_mdns_client.o 00:08:28.312 CC module/bdev/nvme/vbdev_opal.o 00:08:28.312 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:28.312 CC module/bdev/gpt/gpt.o 00:08:28.312 CC module/bdev/gpt/vbdev_gpt.o 00:08:28.312 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:28.312 CC module/bdev/malloc/bdev_malloc.o 00:08:28.312 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:28.312 CC module/bdev/delay/vbdev_delay.o 00:08:28.312 CC module/bdev/aio/bdev_aio.o 00:08:28.312 CC module/bdev/lvol/vbdev_lvol.o 00:08:28.312 CC module/bdev/aio/bdev_aio_rpc.o 00:08:28.312 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:28.312 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:28.312 CC module/bdev/raid/bdev_raid.o 00:08:28.312 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:28.312 CC module/bdev/error/vbdev_error.o 00:08:28.312 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:28.312 CC module/bdev/passthru/vbdev_passthru.o 00:08:28.312 CC module/bdev/error/vbdev_error_rpc.o 00:08:28.312 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:28.312 CC module/bdev/raid/bdev_raid_rpc.o 00:08:28.312 CC module/bdev/split/vbdev_split_rpc.o 00:08:28.312 CC module/bdev/split/vbdev_split.o 00:08:28.312 CC module/bdev/raid/bdev_raid_sb.o 00:08:28.312 CC module/bdev/raid/raid0.o 00:08:28.312 CC module/blobfs/bdev/blobfs_bdev.o 00:08:28.312 CC module/bdev/raid/concat.o 00:08:28.312 CC module/bdev/raid/raid1.o 00:08:28.312 CC module/bdev/null/bdev_null.o 
00:08:28.312 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:28.312 CC module/bdev/iscsi/bdev_iscsi.o 00:08:28.312 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:28.312 CC module/bdev/null/bdev_null_rpc.o 00:08:28.312 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:28.312 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:28.312 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:28.312 CC module/bdev/ftl/bdev_ftl.o 00:08:28.313 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:28.571 LIB libspdk_blobfs_bdev.a 00:08:28.571 SO libspdk_blobfs_bdev.so.6.0 00:08:28.571 LIB libspdk_bdev_gpt.a 00:08:28.831 LIB libspdk_bdev_split.a 00:08:28.831 LIB libspdk_bdev_null.a 00:08:28.831 SO libspdk_bdev_gpt.so.6.0 00:08:28.831 LIB libspdk_bdev_passthru.a 00:08:28.831 SYMLINK libspdk_blobfs_bdev.so 00:08:28.831 SO libspdk_bdev_split.so.6.0 00:08:28.831 LIB libspdk_bdev_ftl.a 00:08:28.831 SO libspdk_bdev_null.so.6.0 00:08:28.831 LIB libspdk_bdev_error.a 00:08:28.831 SO libspdk_bdev_passthru.so.6.0 00:08:28.831 LIB libspdk_bdev_aio.a 00:08:28.831 SO libspdk_bdev_ftl.so.6.0 00:08:28.831 SYMLINK libspdk_bdev_gpt.so 00:08:28.831 SO libspdk_bdev_error.so.6.0 00:08:28.831 LIB libspdk_bdev_malloc.a 00:08:28.831 SO libspdk_bdev_aio.so.6.0 00:08:28.831 LIB libspdk_bdev_zone_block.a 00:08:28.831 LIB libspdk_bdev_iscsi.a 00:08:28.831 SYMLINK libspdk_bdev_split.so 00:08:28.831 SYMLINK libspdk_bdev_null.so 00:08:28.831 LIB libspdk_bdev_delay.a 00:08:28.831 SO libspdk_bdev_malloc.so.6.0 00:08:28.831 SYMLINK libspdk_bdev_passthru.so 00:08:28.831 SYMLINK libspdk_bdev_ftl.so 00:08:28.831 SO libspdk_bdev_zone_block.so.6.0 00:08:28.831 SO libspdk_bdev_iscsi.so.6.0 00:08:28.831 SYMLINK libspdk_bdev_error.so 00:08:28.831 SO libspdk_bdev_delay.so.6.0 00:08:28.831 SYMLINK libspdk_bdev_aio.so 00:08:28.831 SYMLINK libspdk_bdev_malloc.so 00:08:28.831 LIB libspdk_bdev_lvol.a 00:08:28.831 SYMLINK libspdk_bdev_zone_block.so 00:08:28.831 SYMLINK libspdk_bdev_iscsi.so 00:08:28.831 SO libspdk_bdev_lvol.so.6.0 00:08:28.831 SYMLINK libspdk_bdev_delay.so 00:08:28.831 LIB libspdk_bdev_virtio.a 00:08:29.090 SO libspdk_bdev_virtio.so.6.0 00:08:29.090 SYMLINK libspdk_bdev_lvol.so 00:08:29.090 SYMLINK libspdk_bdev_virtio.so 00:08:29.350 LIB libspdk_bdev_raid.a 00:08:29.350 SO libspdk_bdev_raid.so.6.0 00:08:29.350 SYMLINK libspdk_bdev_raid.so 00:08:30.287 LIB libspdk_bdev_nvme.a 00:08:30.287 SO libspdk_bdev_nvme.so.7.1 00:08:30.287 SYMLINK libspdk_bdev_nvme.so 00:08:31.225 CC module/event/subsystems/sock/sock.o 00:08:31.225 CC module/event/subsystems/iobuf/iobuf.o 00:08:31.225 CC module/event/subsystems/keyring/keyring.o 00:08:31.225 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:31.225 CC module/event/subsystems/vmd/vmd.o 00:08:31.225 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:31.225 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:31.225 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:08:31.225 CC module/event/subsystems/scheduler/scheduler.o 00:08:31.225 CC module/event/subsystems/fsdev/fsdev.o 00:08:31.225 LIB libspdk_event_vhost_blk.a 00:08:31.225 LIB libspdk_event_scheduler.a 00:08:31.225 LIB libspdk_event_keyring.a 00:08:31.225 LIB libspdk_event_sock.a 00:08:31.225 LIB libspdk_event_fsdev.a 00:08:31.225 LIB libspdk_event_vmd.a 00:08:31.225 LIB libspdk_event_iobuf.a 00:08:31.225 LIB libspdk_event_vfu_tgt.a 00:08:31.225 SO libspdk_event_vhost_blk.so.3.0 00:08:31.225 SO libspdk_event_keyring.so.1.0 00:08:31.225 SO libspdk_event_scheduler.so.4.0 00:08:31.225 SO libspdk_event_sock.so.5.0 00:08:31.225 SO libspdk_event_fsdev.so.1.0 00:08:31.225 SO 
libspdk_event_vmd.so.6.0 00:08:31.225 SO libspdk_event_iobuf.so.3.0 00:08:31.225 SO libspdk_event_vfu_tgt.so.3.0 00:08:31.225 SYMLINK libspdk_event_sock.so 00:08:31.225 SYMLINK libspdk_event_vhost_blk.so 00:08:31.225 SYMLINK libspdk_event_keyring.so 00:08:31.225 SYMLINK libspdk_event_fsdev.so 00:08:31.225 SYMLINK libspdk_event_scheduler.so 00:08:31.225 SYMLINK libspdk_event_vmd.so 00:08:31.225 SYMLINK libspdk_event_iobuf.so 00:08:31.225 SYMLINK libspdk_event_vfu_tgt.so 00:08:31.485 CC module/event/subsystems/accel/accel.o 00:08:31.745 LIB libspdk_event_accel.a 00:08:31.745 SO libspdk_event_accel.so.6.0 00:08:31.745 SYMLINK libspdk_event_accel.so 00:08:32.314 CC module/event/subsystems/bdev/bdev.o 00:08:32.314 LIB libspdk_event_bdev.a 00:08:32.314 SO libspdk_event_bdev.so.6.0 00:08:32.314 SYMLINK libspdk_event_bdev.so 00:08:32.885 CC module/event/subsystems/scsi/scsi.o 00:08:32.885 CC module/event/subsystems/nbd/nbd.o 00:08:32.885 CC module/event/subsystems/ublk/ublk.o 00:08:32.885 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:32.885 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:32.885 LIB libspdk_event_scsi.a 00:08:32.885 LIB libspdk_event_nbd.a 00:08:32.885 LIB libspdk_event_ublk.a 00:08:32.885 SO libspdk_event_scsi.so.6.0 00:08:32.885 SO libspdk_event_nbd.so.6.0 00:08:32.885 SO libspdk_event_ublk.so.3.0 00:08:32.885 LIB libspdk_event_nvmf.a 00:08:32.885 SYMLINK libspdk_event_scsi.so 00:08:32.885 SYMLINK libspdk_event_nbd.so 00:08:32.885 SO libspdk_event_nvmf.so.6.0 00:08:32.885 SYMLINK libspdk_event_ublk.so 00:08:33.145 SYMLINK libspdk_event_nvmf.so 00:08:33.145 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:33.145 CC module/event/subsystems/iscsi/iscsi.o 00:08:33.405 LIB libspdk_event_vhost_scsi.a 00:08:33.405 LIB libspdk_event_iscsi.a 00:08:33.405 SO libspdk_event_vhost_scsi.so.3.0 00:08:33.405 SO libspdk_event_iscsi.so.6.0 00:08:33.405 SYMLINK libspdk_event_vhost_scsi.so 00:08:33.405 SYMLINK libspdk_event_iscsi.so 00:08:33.665 SO libspdk.so.6.0 00:08:33.665 SYMLINK libspdk.so 00:08:33.924 CC app/trace_record/trace_record.o 00:08:33.924 CC app/spdk_lspci/spdk_lspci.o 00:08:33.924 CC app/spdk_nvme_perf/perf.o 00:08:33.924 CXX app/trace/trace.o 00:08:33.924 CC app/spdk_nvme_discover/discovery_aer.o 00:08:33.924 TEST_HEADER include/spdk/assert.h 00:08:33.924 TEST_HEADER include/spdk/barrier.h 00:08:33.924 TEST_HEADER include/spdk/accel_module.h 00:08:33.924 CC app/spdk_top/spdk_top.o 00:08:33.924 TEST_HEADER include/spdk/accel.h 00:08:33.924 CC test/rpc_client/rpc_client_test.o 00:08:33.924 TEST_HEADER include/spdk/bdev_zone.h 00:08:33.924 TEST_HEADER include/spdk/bdev_module.h 00:08:33.924 TEST_HEADER include/spdk/base64.h 00:08:33.924 TEST_HEADER include/spdk/bit_array.h 00:08:33.924 TEST_HEADER include/spdk/bdev.h 00:08:33.924 TEST_HEADER include/spdk/bit_pool.h 00:08:33.924 TEST_HEADER include/spdk/blob_bdev.h 00:08:33.924 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:33.924 TEST_HEADER include/spdk/blobfs.h 00:08:33.924 TEST_HEADER include/spdk/conf.h 00:08:33.924 TEST_HEADER include/spdk/blob.h 00:08:33.924 TEST_HEADER include/spdk/config.h 00:08:33.924 CC app/spdk_nvme_identify/identify.o 00:08:33.924 TEST_HEADER include/spdk/cpuset.h 00:08:33.924 TEST_HEADER include/spdk/crc16.h 00:08:33.924 TEST_HEADER include/spdk/crc64.h 00:08:33.924 TEST_HEADER include/spdk/crc32.h 00:08:33.924 TEST_HEADER include/spdk/dif.h 00:08:33.924 TEST_HEADER include/spdk/dma.h 00:08:33.924 TEST_HEADER include/spdk/endian.h 00:08:33.924 CC examples/interrupt_tgt/interrupt_tgt.o 
00:08:33.924 TEST_HEADER include/spdk/env_dpdk.h 00:08:33.924 TEST_HEADER include/spdk/env.h 00:08:33.924 TEST_HEADER include/spdk/event.h 00:08:33.924 TEST_HEADER include/spdk/fd.h 00:08:33.924 TEST_HEADER include/spdk/fd_group.h 00:08:33.924 TEST_HEADER include/spdk/fsdev.h 00:08:33.924 TEST_HEADER include/spdk/file.h 00:08:33.924 TEST_HEADER include/spdk/fsdev_module.h 00:08:33.924 TEST_HEADER include/spdk/ftl.h 00:08:33.924 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:33.924 TEST_HEADER include/spdk/gpt_spec.h 00:08:33.924 TEST_HEADER include/spdk/hexlify.h 00:08:34.192 TEST_HEADER include/spdk/idxd.h 00:08:34.192 TEST_HEADER include/spdk/histogram_data.h 00:08:34.192 TEST_HEADER include/spdk/idxd_spec.h 00:08:34.192 TEST_HEADER include/spdk/init.h 00:08:34.192 TEST_HEADER include/spdk/ioat.h 00:08:34.192 TEST_HEADER include/spdk/iscsi_spec.h 00:08:34.193 TEST_HEADER include/spdk/ioat_spec.h 00:08:34.193 TEST_HEADER include/spdk/json.h 00:08:34.193 TEST_HEADER include/spdk/keyring.h 00:08:34.193 TEST_HEADER include/spdk/jsonrpc.h 00:08:34.193 TEST_HEADER include/spdk/keyring_module.h 00:08:34.193 CC app/nvmf_tgt/nvmf_main.o 00:08:34.193 TEST_HEADER include/spdk/lvol.h 00:08:34.193 TEST_HEADER include/spdk/likely.h 00:08:34.193 TEST_HEADER include/spdk/md5.h 00:08:34.193 TEST_HEADER include/spdk/log.h 00:08:34.193 CC app/spdk_dd/spdk_dd.o 00:08:34.193 TEST_HEADER include/spdk/nbd.h 00:08:34.193 TEST_HEADER include/spdk/mmio.h 00:08:34.193 TEST_HEADER include/spdk/memory.h 00:08:34.193 TEST_HEADER include/spdk/net.h 00:08:34.193 TEST_HEADER include/spdk/nvme.h 00:08:34.193 TEST_HEADER include/spdk/nvme_intel.h 00:08:34.193 TEST_HEADER include/spdk/notify.h 00:08:34.193 CC app/iscsi_tgt/iscsi_tgt.o 00:08:34.193 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:34.193 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:34.193 TEST_HEADER include/spdk/nvme_zns.h 00:08:34.193 TEST_HEADER include/spdk/nvme_spec.h 00:08:34.193 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:34.193 TEST_HEADER include/spdk/nvmf.h 00:08:34.193 TEST_HEADER include/spdk/nvmf_spec.h 00:08:34.193 TEST_HEADER include/spdk/nvmf_transport.h 00:08:34.193 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:34.193 TEST_HEADER include/spdk/opal_spec.h 00:08:34.193 TEST_HEADER include/spdk/opal.h 00:08:34.193 TEST_HEADER include/spdk/pci_ids.h 00:08:34.193 TEST_HEADER include/spdk/queue.h 00:08:34.193 TEST_HEADER include/spdk/pipe.h 00:08:34.193 TEST_HEADER include/spdk/rpc.h 00:08:34.193 TEST_HEADER include/spdk/scsi.h 00:08:34.193 TEST_HEADER include/spdk/scheduler.h 00:08:34.193 TEST_HEADER include/spdk/reduce.h 00:08:34.193 TEST_HEADER include/spdk/scsi_spec.h 00:08:34.193 TEST_HEADER include/spdk/sock.h 00:08:34.193 TEST_HEADER include/spdk/stdinc.h 00:08:34.193 TEST_HEADER include/spdk/string.h 00:08:34.193 TEST_HEADER include/spdk/thread.h 00:08:34.193 TEST_HEADER include/spdk/trace.h 00:08:34.193 TEST_HEADER include/spdk/trace_parser.h 00:08:34.193 TEST_HEADER include/spdk/ublk.h 00:08:34.193 TEST_HEADER include/spdk/tree.h 00:08:34.193 TEST_HEADER include/spdk/util.h 00:08:34.193 TEST_HEADER include/spdk/uuid.h 00:08:34.193 TEST_HEADER include/spdk/version.h 00:08:34.193 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:34.193 TEST_HEADER include/spdk/vmd.h 00:08:34.193 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:34.193 TEST_HEADER include/spdk/vhost.h 00:08:34.193 TEST_HEADER include/spdk/xor.h 00:08:34.193 TEST_HEADER include/spdk/zipf.h 00:08:34.193 CXX test/cpp_headers/accel.o 00:08:34.193 CXX test/cpp_headers/assert.o 
00:08:34.193 CXX test/cpp_headers/accel_module.o 00:08:34.193 CXX test/cpp_headers/barrier.o 00:08:34.193 CXX test/cpp_headers/base64.o 00:08:34.193 CXX test/cpp_headers/bdev.o 00:08:34.193 CXX test/cpp_headers/bdev_module.o 00:08:34.193 CXX test/cpp_headers/bdev_zone.o 00:08:34.193 CXX test/cpp_headers/bit_pool.o 00:08:34.193 CXX test/cpp_headers/bit_array.o 00:08:34.193 CXX test/cpp_headers/blobfs_bdev.o 00:08:34.193 CXX test/cpp_headers/blobfs.o 00:08:34.193 CXX test/cpp_headers/blob.o 00:08:34.193 CXX test/cpp_headers/blob_bdev.o 00:08:34.193 CXX test/cpp_headers/conf.o 00:08:34.193 CXX test/cpp_headers/crc16.o 00:08:34.193 CXX test/cpp_headers/config.o 00:08:34.193 CXX test/cpp_headers/cpuset.o 00:08:34.193 CXX test/cpp_headers/crc32.o 00:08:34.193 CXX test/cpp_headers/crc64.o 00:08:34.193 CXX test/cpp_headers/dma.o 00:08:34.193 CXX test/cpp_headers/dif.o 00:08:34.193 CXX test/cpp_headers/env_dpdk.o 00:08:34.193 CXX test/cpp_headers/endian.o 00:08:34.193 CXX test/cpp_headers/env.o 00:08:34.193 CXX test/cpp_headers/fd_group.o 00:08:34.193 CXX test/cpp_headers/event.o 00:08:34.193 CXX test/cpp_headers/fd.o 00:08:34.193 CXX test/cpp_headers/file.o 00:08:34.193 CXX test/cpp_headers/fsdev_module.o 00:08:34.193 CXX test/cpp_headers/ftl.o 00:08:34.193 CXX test/cpp_headers/fsdev.o 00:08:34.193 CXX test/cpp_headers/fuse_dispatcher.o 00:08:34.193 CXX test/cpp_headers/gpt_spec.o 00:08:34.193 CXX test/cpp_headers/hexlify.o 00:08:34.193 CXX test/cpp_headers/histogram_data.o 00:08:34.193 CXX test/cpp_headers/idxd_spec.o 00:08:34.193 CXX test/cpp_headers/ioat.o 00:08:34.193 CXX test/cpp_headers/idxd.o 00:08:34.193 CXX test/cpp_headers/init.o 00:08:34.193 CXX test/cpp_headers/iscsi_spec.o 00:08:34.193 CXX test/cpp_headers/ioat_spec.o 00:08:34.193 CXX test/cpp_headers/json.o 00:08:34.193 CXX test/cpp_headers/jsonrpc.o 00:08:34.193 CXX test/cpp_headers/keyring.o 00:08:34.193 CXX test/cpp_headers/likely.o 00:08:34.193 CXX test/cpp_headers/keyring_module.o 00:08:34.193 CXX test/cpp_headers/log.o 00:08:34.193 CXX test/cpp_headers/lvol.o 00:08:34.193 CXX test/cpp_headers/memory.o 00:08:34.193 CXX test/cpp_headers/md5.o 00:08:34.193 CXX test/cpp_headers/mmio.o 00:08:34.193 CXX test/cpp_headers/nbd.o 00:08:34.193 CXX test/cpp_headers/notify.o 00:08:34.193 CXX test/cpp_headers/net.o 00:08:34.193 CXX test/cpp_headers/nvme.o 00:08:34.193 CXX test/cpp_headers/nvme_intel.o 00:08:34.193 CXX test/cpp_headers/nvme_spec.o 00:08:34.193 CXX test/cpp_headers/nvme_ocssd.o 00:08:34.193 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:34.193 CXX test/cpp_headers/nvme_zns.o 00:08:34.193 CXX test/cpp_headers/nvmf_cmd.o 00:08:34.193 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:34.193 CXX test/cpp_headers/nvmf.o 00:08:34.193 CXX test/cpp_headers/nvmf_spec.o 00:08:34.193 CXX test/cpp_headers/nvmf_transport.o 00:08:34.193 CC app/spdk_tgt/spdk_tgt.o 00:08:34.193 CXX test/cpp_headers/opal.o 00:08:34.193 CC examples/ioat/perf/perf.o 00:08:34.193 CC examples/util/zipf/zipf.o 00:08:34.193 CC examples/ioat/verify/verify.o 00:08:34.193 CC test/app/histogram_perf/histogram_perf.o 00:08:34.193 CC test/thread/poller_perf/poller_perf.o 00:08:34.193 CC test/app/jsoncat/jsoncat.o 00:08:34.193 CXX test/cpp_headers/opal_spec.o 00:08:34.193 CC app/fio/nvme/fio_plugin.o 00:08:34.193 CC test/app/stub/stub.o 00:08:34.193 CC test/env/vtophys/vtophys.o 00:08:34.193 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:34.193 CC test/env/pci/pci_ut.o 00:08:34.193 CC test/app/bdev_svc/bdev_svc.o 00:08:34.193 CC test/env/memory/memory_ut.o 00:08:34.463 CC 
test/dma/test_dma/test_dma.o 00:08:34.463 CC app/fio/bdev/fio_plugin.o 00:08:34.463 LINK rpc_client_test 00:08:34.463 LINK interrupt_tgt 00:08:34.463 LINK spdk_nvme_discover 00:08:34.463 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:34.463 LINK spdk_lspci 00:08:34.733 LINK nvmf_tgt 00:08:34.733 LINK zipf 00:08:34.733 LINK jsoncat 00:08:34.733 LINK poller_perf 00:08:34.733 CXX test/cpp_headers/pci_ids.o 00:08:34.733 CC test/env/mem_callbacks/mem_callbacks.o 00:08:34.733 LINK vtophys 00:08:34.733 CXX test/cpp_headers/pipe.o 00:08:34.733 LINK iscsi_tgt 00:08:34.733 LINK stub 00:08:34.733 CXX test/cpp_headers/queue.o 00:08:34.733 CXX test/cpp_headers/reduce.o 00:08:34.733 CXX test/cpp_headers/rpc.o 00:08:34.733 LINK env_dpdk_post_init 00:08:34.733 CXX test/cpp_headers/scheduler.o 00:08:34.733 CXX test/cpp_headers/scsi.o 00:08:34.733 CXX test/cpp_headers/scsi_spec.o 00:08:34.733 CXX test/cpp_headers/sock.o 00:08:34.733 CXX test/cpp_headers/stdinc.o 00:08:34.733 CXX test/cpp_headers/string.o 00:08:34.733 CXX test/cpp_headers/thread.o 00:08:34.733 CXX test/cpp_headers/trace.o 00:08:34.733 CXX test/cpp_headers/trace_parser.o 00:08:34.733 CXX test/cpp_headers/tree.o 00:08:34.733 CXX test/cpp_headers/ublk.o 00:08:34.733 CXX test/cpp_headers/util.o 00:08:34.733 CXX test/cpp_headers/uuid.o 00:08:34.733 LINK spdk_trace_record 00:08:34.733 LINK verify 00:08:34.733 CXX test/cpp_headers/vfio_user_pci.o 00:08:34.733 CXX test/cpp_headers/version.o 00:08:34.733 CXX test/cpp_headers/vfio_user_spec.o 00:08:34.733 CXX test/cpp_headers/vmd.o 00:08:34.733 CXX test/cpp_headers/vhost.o 00:08:34.733 CXX test/cpp_headers/xor.o 00:08:34.733 LINK histogram_perf 00:08:34.733 CXX test/cpp_headers/zipf.o 00:08:34.991 LINK spdk_dd 00:08:34.991 LINK bdev_svc 00:08:34.991 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:34.991 LINK ioat_perf 00:08:34.991 LINK spdk_tgt 00:08:34.991 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:34.991 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:34.991 LINK spdk_trace 00:08:34.991 LINK pci_ut 00:08:35.250 LINK nvme_fuzz 00:08:35.250 CC examples/sock/hello_world/hello_sock.o 00:08:35.250 CC test/event/event_perf/event_perf.o 00:08:35.250 CC examples/idxd/perf/perf.o 00:08:35.250 CC examples/vmd/led/led.o 00:08:35.250 CC test/event/reactor/reactor.o 00:08:35.250 CC examples/vmd/lsvmd/lsvmd.o 00:08:35.250 CC test/event/reactor_perf/reactor_perf.o 00:08:35.250 LINK test_dma 00:08:35.250 CC examples/thread/thread/thread_ex.o 00:08:35.250 CC test/event/app_repeat/app_repeat.o 00:08:35.250 CC test/event/scheduler/scheduler.o 00:08:35.250 LINK spdk_nvme_identify 00:08:35.250 LINK spdk_nvme 00:08:35.250 LINK event_perf 00:08:35.250 LINK led 00:08:35.250 LINK vhost_fuzz 00:08:35.250 LINK reactor_perf 00:08:35.250 LINK lsvmd 00:08:35.250 LINK reactor 00:08:35.250 LINK spdk_bdev 00:08:35.250 CC app/vhost/vhost.o 00:08:35.509 LINK spdk_top 00:08:35.509 LINK spdk_nvme_perf 00:08:35.509 LINK hello_sock 00:08:35.509 LINK app_repeat 00:08:35.509 LINK mem_callbacks 00:08:35.509 LINK thread 00:08:35.509 LINK scheduler 00:08:35.509 LINK idxd_perf 00:08:35.509 LINK vhost 00:08:35.768 CC test/nvme/overhead/overhead.o 00:08:35.768 CC test/nvme/boot_partition/boot_partition.o 00:08:35.768 CC test/nvme/cuse/cuse.o 00:08:35.768 CC test/nvme/simple_copy/simple_copy.o 00:08:35.768 CC test/nvme/connect_stress/connect_stress.o 00:08:35.768 CC test/nvme/aer/aer.o 00:08:35.768 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:35.768 CC test/nvme/err_injection/err_injection.o 00:08:35.768 CC test/nvme/e2edp/nvme_dp.o 
00:08:35.768 CC test/nvme/sgl/sgl.o 00:08:35.768 CC test/nvme/startup/startup.o 00:08:35.768 CC test/nvme/compliance/nvme_compliance.o 00:08:35.768 CC test/nvme/fused_ordering/fused_ordering.o 00:08:35.768 CC test/nvme/reserve/reserve.o 00:08:35.768 CC test/nvme/reset/reset.o 00:08:35.768 CC test/nvme/fdp/fdp.o 00:08:35.768 CC test/accel/dif/dif.o 00:08:35.768 CC test/blobfs/mkfs/mkfs.o 00:08:35.768 LINK memory_ut 00:08:35.768 CC test/lvol/esnap/esnap.o 00:08:35.768 CC examples/nvme/reconnect/reconnect.o 00:08:35.768 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:35.768 CC examples/nvme/arbitration/arbitration.o 00:08:35.768 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:35.768 CC examples/nvme/hello_world/hello_world.o 00:08:35.768 CC examples/nvme/abort/abort.o 00:08:35.768 CC examples/nvme/hotplug/hotplug.o 00:08:35.768 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:36.026 LINK connect_stress 00:08:36.026 LINK startup 00:08:36.026 LINK boot_partition 00:08:36.026 LINK err_injection 00:08:36.026 LINK doorbell_aers 00:08:36.026 CC examples/accel/perf/accel_perf.o 00:08:36.026 LINK reserve 00:08:36.026 LINK simple_copy 00:08:36.026 LINK fused_ordering 00:08:36.026 LINK overhead 00:08:36.026 LINK sgl 00:08:36.026 CC examples/blob/cli/blobcli.o 00:08:36.026 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:36.026 LINK nvme_dp 00:08:36.026 LINK reset 00:08:36.026 LINK mkfs 00:08:36.026 LINK aer 00:08:36.026 CC examples/blob/hello_world/hello_blob.o 00:08:36.026 LINK nvme_compliance 00:08:36.026 LINK pmr_persistence 00:08:36.026 LINK fdp 00:08:36.026 LINK cmb_copy 00:08:36.026 LINK hello_world 00:08:36.026 LINK hotplug 00:08:36.285 LINK reconnect 00:08:36.285 LINK arbitration 00:08:36.285 LINK abort 00:08:36.285 LINK hello_fsdev 00:08:36.285 LINK hello_blob 00:08:36.285 LINK dif 00:08:36.285 LINK nvme_manage 00:08:36.285 LINK accel_perf 00:08:36.285 LINK iscsi_fuzz 00:08:36.285 LINK blobcli 00:08:36.853 LINK cuse 00:08:36.853 CC test/bdev/bdevio/bdevio.o 00:08:36.853 CC examples/bdev/hello_world/hello_bdev.o 00:08:36.853 CC examples/bdev/bdevperf/bdevperf.o 00:08:37.113 LINK hello_bdev 00:08:37.113 LINK bdevio 00:08:37.372 LINK bdevperf 00:08:37.940 CC examples/nvmf/nvmf/nvmf.o 00:08:38.200 LINK nvmf 00:08:39.581 LINK esnap 00:08:39.581 00:08:39.581 real 0m55.575s 00:08:39.581 user 7m59.152s 00:08:39.581 sys 3m41.568s 00:08:39.581 23:50:14 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:39.581 23:50:14 make -- common/autotest_common.sh@10 -- $ set +x 00:08:39.581 ************************************ 00:08:39.581 END TEST make 00:08:39.581 ************************************ 00:08:39.581 23:50:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:39.581 23:50:14 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:39.581 23:50:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:39.582 23:50:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:39.582 23:50:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-cpu-load.pid ]] 00:08:39.582 23:50:14 -- pm/common@44 -- $ pid=93400 00:08:39.582 23:50:14 -- pm/common@50 -- $ kill -TERM 93400 00:08:39.582 23:50:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:39.582 23:50:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-vmstat.pid ]] 00:08:39.582 23:50:14 -- pm/common@44 -- $ pid=93401 00:08:39.582 23:50:14 -- pm/common@50 -- $ kill -TERM 93401 00:08:39.582 
23:50:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:39.582 23:50:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-cpu-temp.pid ]] 00:08:39.582 23:50:14 -- pm/common@44 -- $ pid=93403 00:08:39.582 23:50:14 -- pm/common@50 -- $ kill -TERM 93403 00:08:39.582 23:50:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:39.582 23:50:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-bmc-pm.pid ]] 00:08:39.582 23:50:14 -- pm/common@44 -- $ pid=93426 00:08:39.582 23:50:14 -- pm/common@50 -- $ sudo -E kill -TERM 93426 00:08:39.842 23:50:14 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:39.842 23:50:14 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf 00:08:39.842 23:50:14 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:39.842 23:50:14 -- common/autotest_common.sh@1711 -- # lcov --version 00:08:39.842 23:50:14 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:39.842 23:50:14 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:39.842 23:50:14 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.842 23:50:14 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.842 23:50:14 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.842 23:50:14 -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.842 23:50:14 -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.842 23:50:14 -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.842 23:50:14 -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.842 23:50:14 -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.842 23:50:14 -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.842 23:50:14 -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.842 23:50:14 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.842 23:50:14 -- scripts/common.sh@344 -- # case "$op" in 00:08:39.842 23:50:14 -- scripts/common.sh@345 -- # : 1 00:08:39.842 23:50:14 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.842 23:50:14 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.842 23:50:14 -- scripts/common.sh@365 -- # decimal 1 00:08:39.842 23:50:14 -- scripts/common.sh@353 -- # local d=1 00:08:39.842 23:50:14 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.842 23:50:14 -- scripts/common.sh@355 -- # echo 1 00:08:39.842 23:50:14 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.842 23:50:14 -- scripts/common.sh@366 -- # decimal 2 00:08:39.842 23:50:14 -- scripts/common.sh@353 -- # local d=2 00:08:39.842 23:50:14 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.842 23:50:14 -- scripts/common.sh@355 -- # echo 2 00:08:39.842 23:50:14 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.842 23:50:14 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.842 23:50:14 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.842 23:50:14 -- scripts/common.sh@368 -- # return 0 00:08:39.842 23:50:14 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.842 23:50:14 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:39.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.842 --rc genhtml_branch_coverage=1 00:08:39.842 --rc genhtml_function_coverage=1 00:08:39.842 --rc genhtml_legend=1 00:08:39.842 --rc geninfo_all_blocks=1 00:08:39.842 --rc geninfo_unexecuted_blocks=1 00:08:39.842 00:08:39.842 ' 00:08:39.842 23:50:14 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:39.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.842 --rc genhtml_branch_coverage=1 00:08:39.842 --rc genhtml_function_coverage=1 00:08:39.842 --rc genhtml_legend=1 00:08:39.842 --rc geninfo_all_blocks=1 00:08:39.842 --rc geninfo_unexecuted_blocks=1 00:08:39.842 00:08:39.842 ' 00:08:39.842 23:50:14 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:39.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.842 --rc genhtml_branch_coverage=1 00:08:39.842 --rc genhtml_function_coverage=1 00:08:39.842 --rc genhtml_legend=1 00:08:39.842 --rc geninfo_all_blocks=1 00:08:39.842 --rc geninfo_unexecuted_blocks=1 00:08:39.842 00:08:39.842 ' 00:08:39.842 23:50:14 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:39.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.842 --rc genhtml_branch_coverage=1 00:08:39.842 --rc genhtml_function_coverage=1 00:08:39.842 --rc genhtml_legend=1 00:08:39.842 --rc geninfo_all_blocks=1 00:08:39.842 --rc geninfo_unexecuted_blocks=1 00:08:39.842 00:08:39.842 ' 00:08:39.842 23:50:14 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:08:39.842 23:50:14 -- nvmf/common.sh@7 -- # uname -s 00:08:39.842 23:50:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.842 23:50:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.842 23:50:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.842 23:50:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.842 23:50:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.842 23:50:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.842 23:50:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.842 23:50:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.842 23:50:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.842 23:50:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.842 23:50:14 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:39.842 23:50:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:39.842 23:50:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.842 23:50:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.842 23:50:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.842 23:50:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.842 23:50:14 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:08:39.842 23:50:14 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.842 23:50:14 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.842 23:50:14 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.842 23:50:14 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.842 23:50:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.843 23:50:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.843 23:50:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.843 23:50:14 -- paths/export.sh@5 -- # export PATH 00:08:39.843 23:50:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.843 23:50:14 -- nvmf/common.sh@51 -- # : 0 00:08:39.843 23:50:14 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.843 23:50:14 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.843 23:50:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.843 23:50:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.843 23:50:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.843 23:50:14 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.843 23:50:14 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.843 23:50:14 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.843 23:50:14 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.843 23:50:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:39.843 23:50:14 -- spdk/autotest.sh@32 -- # uname -s 00:08:40.103 23:50:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:40.103 23:50:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:40.103 23:50:14 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/coredumps 00:08:40.103 23:50:14 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/core-collector.sh %P %s %t' 00:08:40.103 23:50:14 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/coredumps 00:08:40.103 23:50:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:40.103 23:50:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:40.103 23:50:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:40.103 23:50:14 -- spdk/autotest.sh@48 -- # udevadm_pid=156394 00:08:40.103 23:50:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:40.103 23:50:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:40.103 23:50:14 -- pm/common@17 -- # local monitor 00:08:40.103 23:50:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:40.103 23:50:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:40.103 23:50:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:40.103 23:50:14 -- pm/common@21 -- # date +%s 00:08:40.103 23:50:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:40.103 23:50:14 -- pm/common@21 -- # date +%s 00:08:40.103 23:50:14 -- pm/common@21 -- # date +%s 00:08:40.103 23:50:14 -- pm/common@25 -- # sleep 1 00:08:40.103 23:50:14 -- pm/common@21 -- # date +%s 00:08:40.103 23:50:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733784614 00:08:40.103 23:50:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733784614 00:08:40.103 23:50:14 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733784614 00:08:40.103 23:50:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733784614 00:08:40.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733784614_collect-cpu-temp.pm.log 00:08:40.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733784614_collect-vmstat.pm.log 00:08:40.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733784614_collect-cpu-load.pm.log 00:08:40.103 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733784614_collect-bmc-pm.bmc.pm.log 00:08:41.040 23:50:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:41.040 23:50:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:41.040 23:50:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:41.040 23:50:15 -- common/autotest_common.sh@10 -- # set +x 00:08:41.040 23:50:15 -- spdk/autotest.sh@59 -- # create_test_list 00:08:41.040 23:50:15 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:41.040 23:50:15 -- common/autotest_common.sh@10 -- # set +x 00:08:41.040 23:50:15 -- 
spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/autotest.sh 00:08:41.040 23:50:15 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:08:41.040 23:50:15 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:08:41.040 23:50:15 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output 00:08:41.040 23:50:15 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:08:41.040 23:50:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:41.040 23:50:15 -- common/autotest_common.sh@1457 -- # uname 00:08:41.040 23:50:15 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:41.040 23:50:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:41.040 23:50:15 -- common/autotest_common.sh@1477 -- # uname 00:08:41.040 23:50:15 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:41.040 23:50:15 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:41.040 23:50:15 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:41.040 lcov: LCOV version 1.15 00:08:41.040 23:50:15 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_base.info 00:08:53.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:53.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/nvme/nvme_stubs.gcno 00:09:08.158 23:50:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:08.158 23:50:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.158 23:50:40 -- common/autotest_common.sh@10 -- # set +x 00:09:08.158 23:50:41 -- spdk/autotest.sh@78 -- # rm -f 00:09:08.158 23:50:41 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:09:09.098 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:09:09.098 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:09:09.098 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:09:09.098 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:09:09.098 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:09:09.098 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:09:09.098 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:09:09.098 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:09:09.098 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:09:09.098 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:09:09.098 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:09:09.098 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:09:09.098 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:09:09.358 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:09:09.358 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:09:09.358 0000:80:04.1 (8086 2021): 
Already using the ioatdma driver 00:09:09.358 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:09:09.358 23:50:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:09:09.358 23:50:44 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:09.358 23:50:44 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:09.358 23:50:44 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:09:09.358 23:50:44 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:09:09.358 23:50:44 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:09:09.358 23:50:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:09.358 23:50:44 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:09:09.358 23:50:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:09.358 23:50:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:09:09.358 23:50:44 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:09.358 23:50:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:09.358 23:50:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:09.358 23:50:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:09.358 23:50:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:09.358 23:50:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:09.358 23:50:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:09:09.358 23:50:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:09.358 23:50:44 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:09.358 No valid GPT data, bailing 00:09:09.358 23:50:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:09.358 23:50:44 -- scripts/common.sh@394 -- # pt= 00:09:09.358 23:50:44 -- scripts/common.sh@395 -- # return 1 00:09:09.358 23:50:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:09.358 1+0 records in 00:09:09.358 1+0 records out 00:09:09.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00561731 s, 187 MB/s 00:09:09.358 23:50:44 -- spdk/autotest.sh@105 -- # sync 00:09:09.358 23:50:44 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:09.358 23:50:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:09.358 23:50:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:15.937 23:50:49 -- spdk/autotest.sh@111 -- # uname -s 00:09:15.937 23:50:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:15.937 23:50:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:09:15.937 23:50:49 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh status 00:09:17.845 Hugepages 00:09:17.845 node hugesize free / total 00:09:17.845 node0 1048576kB 0 / 0 00:09:17.845 node0 2048kB 0 / 0 00:09:17.845 node1 1048576kB 0 / 0 00:09:17.845 node1 2048kB 0 / 0 00:09:17.845 00:09:17.845 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:17.845 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:09:17.845 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:09:17.845 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:09:17.845 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:09:17.845 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:09:17.845 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:09:17.845 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:09:17.845 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:09:17.845 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:09:17.845 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:09:17.845 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:09:17.845 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:09:17.845 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:09:17.845 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:09:17.845 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:09:17.845 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:09:17.845 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:09:17.845 23:50:52 -- spdk/autotest.sh@117 -- # uname -s 00:09:17.845 23:50:52 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:17.845 23:50:52 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:17.845 23:50:52 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:09:21.140 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:21.140 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:21.711 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:09:21.711 23:50:56 -- common/autotest_common.sh@1517 -- # sleep 1 00:09:23.091 23:50:57 -- common/autotest_common.sh@1518 -- # bdfs=() 00:09:23.091 23:50:57 -- common/autotest_common.sh@1518 -- # local bdfs 00:09:23.091 23:50:57 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:09:23.091 23:50:57 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:09:23.091 23:50:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:23.091 23:50:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:23.091 23:50:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:23.091 23:50:57 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:09:23.091 23:50:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:23.091 23:50:57 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:09:23.091 23:50:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:09:23.091 23:50:57 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:09:25.631 Waiting for block devices as requested 00:09:25.631 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:09:25.890 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:25.890 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:25.890 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:26.149 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:26.149 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:26.149 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:26.149 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:26.409 0000:00:04.0 (8086 
2021): vfio-pci -> ioatdma 00:09:26.409 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:26.409 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:26.669 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:26.669 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:26.669 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:26.669 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:26.929 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:26.929 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:26.929 23:51:01 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:26.929 23:51:01 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:09:26.929 23:51:01 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:09:26.929 23:51:01 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:09:26.929 23:51:01 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:09:26.929 23:51:01 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:09:26.929 23:51:01 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:09:26.929 23:51:01 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:09:26.929 23:51:01 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:09:26.929 23:51:01 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:09:26.929 23:51:01 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:09:26.929 23:51:01 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:26.929 23:51:01 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:26.929 23:51:01 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:09:26.929 23:51:01 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:26.929 23:51:01 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:26.929 23:51:01 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:26.929 23:51:01 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:27.189 23:51:01 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:27.189 23:51:01 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:27.189 23:51:01 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:27.189 23:51:01 -- common/autotest_common.sh@1543 -- # continue 00:09:27.189 23:51:01 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:27.189 23:51:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.189 23:51:01 -- common/autotest_common.sh@10 -- # set +x 00:09:27.189 23:51:01 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:27.189 23:51:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.189 23:51:01 -- common/autotest_common.sh@10 -- # set +x 00:09:27.189 23:51:01 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:09:30.484 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:30.484 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:30.484 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:30.484 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:30.484 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:30.484 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:30.484 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:30.484 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:30.484 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:30.484 0000:80:04.6 (8086 2021): ioatdma -> 
vfio-pci 00:09:30.484 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:30.484 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:30.484 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:30.484 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:30.484 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:30.484 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:30.743 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:09:31.002 23:51:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:31.002 23:51:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.002 23:51:05 -- common/autotest_common.sh@10 -- # set +x 00:09:31.002 23:51:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:31.002 23:51:05 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:09:31.002 23:51:05 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:09:31.002 23:51:05 -- common/autotest_common.sh@1563 -- # bdfs=() 00:09:31.002 23:51:05 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:09:31.002 23:51:05 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:09:31.002 23:51:05 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:09:31.002 23:51:05 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:09:31.003 23:51:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:31.003 23:51:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:31.003 23:51:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:31.003 23:51:05 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:09:31.003 23:51:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:31.003 23:51:05 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:09:31.003 23:51:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:09:31.003 23:51:05 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:31.003 23:51:05 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:09:31.003 23:51:05 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:09:31.003 23:51:05 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:09:31.003 23:51:05 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:09:31.003 23:51:05 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:09:31.003 23:51:05 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:09:31.263 23:51:05 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:09:31.263 23:51:05 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=171214 00:09:31.263 23:51:05 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:09:31.263 23:51:05 -- common/autotest_common.sh@1585 -- # waitforlisten 171214 00:09:31.263 23:51:05 -- common/autotest_common.sh@835 -- # '[' -z 171214 ']' 00:09:31.263 23:51:05 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.263 23:51:05 -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.263 23:51:05 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
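The opal_revert_cleanup step above locates the test NVMe controller the same way the earlier pre_cleanup pass did: it asks scripts/gen_nvme.sh for the configured controllers, extracts their PCI addresses with jq, and then filters on the PCI device id 0x0a54 read from sysfs. A minimal sketch of that discovery loop, assuming it is run from the SPDK repository root (paths shortened from the absolute Jenkins workspace paths shown in the log):

  # Enumerate NVMe PCI addresses known to SPDK (as the log does via gen_nvme.sh | jq)
  bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      # Keep only controllers whose PCI device id matches 0x0a54
      device=$(cat "/sys/bus/pci/devices/${bdf}/device")
      [[ "$device" == "0x0a54" ]] && echo "$bdf"
  done

On this node the loop yields the single controller 0000:5e:00.0, which is then handed to spdk_tgt and the bdev_nvme RPCs in the entries that follow.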
00:09:31.263 23:51:05 -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.263 23:51:05 -- common/autotest_common.sh@10 -- # set +x 00:09:31.263 [2024-12-09 23:51:06.000971] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:09:31.263 [2024-12-09 23:51:06.001022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171214 ] 00:09:31.263 [2024-12-09 23:51:06.078888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.263 [2024-12-09 23:51:06.121822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.203 23:51:06 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.203 23:51:06 -- common/autotest_common.sh@868 -- # return 0 00:09:32.203 23:51:06 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:09:32.203 23:51:06 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:09:32.203 23:51:06 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:09:35.496 nvme0n1 00:09:35.496 23:51:09 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:09:35.496 [2024-12-09 23:51:10.016762] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:09:35.496 request: 00:09:35.496 { 00:09:35.496 "nvme_ctrlr_name": "nvme0", 00:09:35.496 "password": "test", 00:09:35.496 "method": "bdev_nvme_opal_revert", 00:09:35.496 "req_id": 1 00:09:35.496 } 00:09:35.496 Got JSON-RPC error response 00:09:35.496 response: 00:09:35.496 { 00:09:35.496 "code": -32602, 00:09:35.496 "message": "Invalid parameters" 00:09:35.496 } 00:09:35.496 23:51:10 -- common/autotest_common.sh@1591 -- # true 00:09:35.496 23:51:10 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:09:35.496 23:51:10 -- common/autotest_common.sh@1595 -- # killprocess 171214 00:09:35.496 23:51:10 -- common/autotest_common.sh@954 -- # '[' -z 171214 ']' 00:09:35.496 23:51:10 -- common/autotest_common.sh@958 -- # kill -0 171214 00:09:35.496 23:51:10 -- common/autotest_common.sh@959 -- # uname 00:09:35.496 23:51:10 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.496 23:51:10 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 171214 00:09:35.496 23:51:10 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.496 23:51:10 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.496 23:51:10 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 171214' 00:09:35.496 killing process with pid 171214 00:09:35.496 23:51:10 -- common/autotest_common.sh@973 -- # kill 171214 00:09:35.496 23:51:10 -- common/autotest_common.sh@978 -- # wait 171214 00:09:36.876 23:51:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:36.876 23:51:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:36.876 23:51:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:36.876 23:51:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:36.876 23:51:11 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:36.876 23:51:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:36.876 23:51:11 -- common/autotest_common.sh@10 -- # set +x 00:09:36.876 23:51:11 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:09:36.876 
23:51:11 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env.sh 00:09:36.876 23:51:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.876 23:51:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.876 23:51:11 -- common/autotest_common.sh@10 -- # set +x 00:09:36.876 ************************************ 00:09:36.876 START TEST env 00:09:36.876 ************************************ 00:09:36.876 23:51:11 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env.sh 00:09:37.136 * Looking for test storage... 00:09:37.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env 00:09:37.136 23:51:11 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:37.136 23:51:11 env -- common/autotest_common.sh@1711 -- # lcov --version 00:09:37.136 23:51:11 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:37.136 23:51:11 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:37.136 23:51:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.136 23:51:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.136 23:51:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.136 23:51:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.136 23:51:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.136 23:51:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.136 23:51:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.136 23:51:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.136 23:51:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.136 23:51:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.136 23:51:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.136 23:51:11 env -- scripts/common.sh@344 -- # case "$op" in 00:09:37.136 23:51:11 env -- scripts/common.sh@345 -- # : 1 00:09:37.136 23:51:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.136 23:51:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.136 23:51:11 env -- scripts/common.sh@365 -- # decimal 1 00:09:37.136 23:51:11 env -- scripts/common.sh@353 -- # local d=1 00:09:37.136 23:51:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.136 23:51:11 env -- scripts/common.sh@355 -- # echo 1 00:09:37.136 23:51:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.136 23:51:11 env -- scripts/common.sh@366 -- # decimal 2 00:09:37.136 23:51:11 env -- scripts/common.sh@353 -- # local d=2 00:09:37.136 23:51:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.136 23:51:11 env -- scripts/common.sh@355 -- # echo 2 00:09:37.136 23:51:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.136 23:51:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.136 23:51:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.136 23:51:11 env -- scripts/common.sh@368 -- # return 0 00:09:37.136 23:51:11 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.136 23:51:11 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:37.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.136 --rc genhtml_branch_coverage=1 00:09:37.136 --rc genhtml_function_coverage=1 00:09:37.136 --rc genhtml_legend=1 00:09:37.136 --rc geninfo_all_blocks=1 00:09:37.136 --rc geninfo_unexecuted_blocks=1 00:09:37.136 00:09:37.136 ' 00:09:37.136 23:51:11 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:37.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.136 --rc genhtml_branch_coverage=1 00:09:37.136 --rc genhtml_function_coverage=1 00:09:37.136 --rc genhtml_legend=1 00:09:37.136 --rc geninfo_all_blocks=1 00:09:37.136 --rc geninfo_unexecuted_blocks=1 00:09:37.136 00:09:37.136 ' 00:09:37.136 23:51:11 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:37.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.136 --rc genhtml_branch_coverage=1 00:09:37.136 --rc genhtml_function_coverage=1 00:09:37.136 --rc genhtml_legend=1 00:09:37.136 --rc geninfo_all_blocks=1 00:09:37.136 --rc geninfo_unexecuted_blocks=1 00:09:37.136 00:09:37.136 ' 00:09:37.136 23:51:11 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:37.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.136 --rc genhtml_branch_coverage=1 00:09:37.136 --rc genhtml_function_coverage=1 00:09:37.136 --rc genhtml_legend=1 00:09:37.136 --rc geninfo_all_blocks=1 00:09:37.136 --rc geninfo_unexecuted_blocks=1 00:09:37.136 00:09:37.136 ' 00:09:37.136 23:51:11 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/memory/memory_ut 00:09:37.136 23:51:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.136 23:51:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.136 23:51:11 env -- common/autotest_common.sh@10 -- # set +x 00:09:37.136 ************************************ 00:09:37.136 START TEST env_memory 00:09:37.136 ************************************ 00:09:37.136 23:51:11 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/memory/memory_ut 00:09:37.136 00:09:37.136 00:09:37.136 CUnit - A unit testing framework for C - Version 2.1-3 00:09:37.136 http://cunit.sourceforge.net/ 00:09:37.136 00:09:37.136 00:09:37.136 Suite: memory 00:09:37.136 Test: alloc and free memory map ...[2024-12-09 23:51:11.977226] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:37.136 passed 00:09:37.136 Test: mem map translation ...[2024-12-09 23:51:11.996459] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:37.136 [2024-12-09 23:51:11.996473] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:37.136 [2024-12-09 23:51:11.996507] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:37.136 [2024-12-09 23:51:11.996513] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:37.136 passed 00:09:37.136 Test: mem map registration ...[2024-12-09 23:51:12.034352] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:37.136 [2024-12-09 23:51:12.034368] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:37.136 passed 00:09:37.397 Test: mem map adjacent registrations ...passed 00:09:37.397 00:09:37.397 Run Summary: Type Total Ran Passed Failed Inactive 00:09:37.397 suites 1 1 n/a 0 0 00:09:37.397 tests 4 4 4 0 0 00:09:37.397 asserts 152 152 152 0 n/a 00:09:37.397 00:09:37.397 Elapsed time = 0.128 seconds 00:09:37.397 00:09:37.397 real 0m0.137s 00:09:37.397 user 0m0.128s 00:09:37.397 sys 0m0.008s 00:09:37.397 23:51:12 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.397 23:51:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:37.397 ************************************ 00:09:37.397 END TEST env_memory 00:09:37.397 ************************************ 00:09:37.397 23:51:12 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/vtophys/vtophys 00:09:37.397 23:51:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.397 23:51:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.397 23:51:12 env -- common/autotest_common.sh@10 -- # set +x 00:09:37.397 ************************************ 00:09:37.397 START TEST env_vtophys 00:09:37.397 ************************************ 00:09:37.397 23:51:12 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/vtophys/vtophys 00:09:37.397 EAL: lib.eal log level changed from notice to debug 00:09:37.397 EAL: Detected lcore 0 as core 0 on socket 0 00:09:37.397 EAL: Detected lcore 1 as core 1 on socket 0 00:09:37.397 EAL: Detected lcore 2 as core 2 on socket 0 00:09:37.397 EAL: Detected lcore 3 as core 3 on socket 0 00:09:37.397 EAL: Detected lcore 4 as core 4 on socket 0 00:09:37.397 EAL: Detected lcore 5 as core 5 on socket 0 00:09:37.397 EAL: Detected lcore 6 as core 6 on socket 0 00:09:37.397 EAL: Detected lcore 7 as core 8 on socket 0 00:09:37.397 EAL: Detected lcore 8 as core 9 on socket 0 00:09:37.397 EAL: Detected lcore 9 as core 10 on socket 0 00:09:37.397 EAL: 
Detected lcore 10 as core 11 on socket 0 00:09:37.397 EAL: Detected lcore 11 as core 12 on socket 0 00:09:37.397 EAL: Detected lcore 12 as core 13 on socket 0 00:09:37.397 EAL: Detected lcore 13 as core 16 on socket 0 00:09:37.397 EAL: Detected lcore 14 as core 17 on socket 0 00:09:37.397 EAL: Detected lcore 15 as core 18 on socket 0 00:09:37.397 EAL: Detected lcore 16 as core 19 on socket 0 00:09:37.397 EAL: Detected lcore 17 as core 20 on socket 0 00:09:37.397 EAL: Detected lcore 18 as core 21 on socket 0 00:09:37.397 EAL: Detected lcore 19 as core 25 on socket 0 00:09:37.397 EAL: Detected lcore 20 as core 26 on socket 0 00:09:37.397 EAL: Detected lcore 21 as core 27 on socket 0 00:09:37.397 EAL: Detected lcore 22 as core 28 on socket 0 00:09:37.397 EAL: Detected lcore 23 as core 29 on socket 0 00:09:37.397 EAL: Detected lcore 24 as core 0 on socket 1 00:09:37.397 EAL: Detected lcore 25 as core 1 on socket 1 00:09:37.397 EAL: Detected lcore 26 as core 2 on socket 1 00:09:37.397 EAL: Detected lcore 27 as core 3 on socket 1 00:09:37.397 EAL: Detected lcore 28 as core 4 on socket 1 00:09:37.397 EAL: Detected lcore 29 as core 5 on socket 1 00:09:37.397 EAL: Detected lcore 30 as core 6 on socket 1 00:09:37.397 EAL: Detected lcore 31 as core 9 on socket 1 00:09:37.397 EAL: Detected lcore 32 as core 10 on socket 1 00:09:37.397 EAL: Detected lcore 33 as core 11 on socket 1 00:09:37.397 EAL: Detected lcore 34 as core 12 on socket 1 00:09:37.397 EAL: Detected lcore 35 as core 13 on socket 1 00:09:37.397 EAL: Detected lcore 36 as core 16 on socket 1 00:09:37.397 EAL: Detected lcore 37 as core 17 on socket 1 00:09:37.397 EAL: Detected lcore 38 as core 18 on socket 1 00:09:37.397 EAL: Detected lcore 39 as core 19 on socket 1 00:09:37.397 EAL: Detected lcore 40 as core 20 on socket 1 00:09:37.397 EAL: Detected lcore 41 as core 21 on socket 1 00:09:37.397 EAL: Detected lcore 42 as core 24 on socket 1 00:09:37.397 EAL: Detected lcore 43 as core 25 on socket 1 00:09:37.397 EAL: Detected lcore 44 as core 26 on socket 1 00:09:37.397 EAL: Detected lcore 45 as core 27 on socket 1 00:09:37.397 EAL: Detected lcore 46 as core 28 on socket 1 00:09:37.397 EAL: Detected lcore 47 as core 29 on socket 1 00:09:37.397 EAL: Detected lcore 48 as core 0 on socket 0 00:09:37.397 EAL: Detected lcore 49 as core 1 on socket 0 00:09:37.397 EAL: Detected lcore 50 as core 2 on socket 0 00:09:37.397 EAL: Detected lcore 51 as core 3 on socket 0 00:09:37.397 EAL: Detected lcore 52 as core 4 on socket 0 00:09:37.397 EAL: Detected lcore 53 as core 5 on socket 0 00:09:37.397 EAL: Detected lcore 54 as core 6 on socket 0 00:09:37.397 EAL: Detected lcore 55 as core 8 on socket 0 00:09:37.397 EAL: Detected lcore 56 as core 9 on socket 0 00:09:37.397 EAL: Detected lcore 57 as core 10 on socket 0 00:09:37.397 EAL: Detected lcore 58 as core 11 on socket 0 00:09:37.397 EAL: Detected lcore 59 as core 12 on socket 0 00:09:37.397 EAL: Detected lcore 60 as core 13 on socket 0 00:09:37.397 EAL: Detected lcore 61 as core 16 on socket 0 00:09:37.397 EAL: Detected lcore 62 as core 17 on socket 0 00:09:37.397 EAL: Detected lcore 63 as core 18 on socket 0 00:09:37.397 EAL: Detected lcore 64 as core 19 on socket 0 00:09:37.397 EAL: Detected lcore 65 as core 20 on socket 0 00:09:37.397 EAL: Detected lcore 66 as core 21 on socket 0 00:09:37.397 EAL: Detected lcore 67 as core 25 on socket 0 00:09:37.397 EAL: Detected lcore 68 as core 26 on socket 0 00:09:37.397 EAL: Detected lcore 69 as core 27 on socket 0 00:09:37.397 EAL: Detected lcore 70 as core 28 on 
socket 0 00:09:37.397 EAL: Detected lcore 71 as core 29 on socket 0 00:09:37.397 EAL: Detected lcore 72 as core 0 on socket 1 00:09:37.397 EAL: Detected lcore 73 as core 1 on socket 1 00:09:37.397 EAL: Detected lcore 74 as core 2 on socket 1 00:09:37.397 EAL: Detected lcore 75 as core 3 on socket 1 00:09:37.397 EAL: Detected lcore 76 as core 4 on socket 1 00:09:37.397 EAL: Detected lcore 77 as core 5 on socket 1 00:09:37.397 EAL: Detected lcore 78 as core 6 on socket 1 00:09:37.397 EAL: Detected lcore 79 as core 9 on socket 1 00:09:37.397 EAL: Detected lcore 80 as core 10 on socket 1 00:09:37.397 EAL: Detected lcore 81 as core 11 on socket 1 00:09:37.397 EAL: Detected lcore 82 as core 12 on socket 1 00:09:37.397 EAL: Detected lcore 83 as core 13 on socket 1 00:09:37.397 EAL: Detected lcore 84 as core 16 on socket 1 00:09:37.397 EAL: Detected lcore 85 as core 17 on socket 1 00:09:37.397 EAL: Detected lcore 86 as core 18 on socket 1 00:09:37.397 EAL: Detected lcore 87 as core 19 on socket 1 00:09:37.397 EAL: Detected lcore 88 as core 20 on socket 1 00:09:37.397 EAL: Detected lcore 89 as core 21 on socket 1 00:09:37.397 EAL: Detected lcore 90 as core 24 on socket 1 00:09:37.397 EAL: Detected lcore 91 as core 25 on socket 1 00:09:37.397 EAL: Detected lcore 92 as core 26 on socket 1 00:09:37.397 EAL: Detected lcore 93 as core 27 on socket 1 00:09:37.397 EAL: Detected lcore 94 as core 28 on socket 1 00:09:37.397 EAL: Detected lcore 95 as core 29 on socket 1 00:09:37.397 EAL: Maximum logical cores by configuration: 128 00:09:37.397 EAL: Detected CPU lcores: 96 00:09:37.397 EAL: Detected NUMA nodes: 2 00:09:37.397 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:37.397 EAL: Detected shared linkage of DPDK 00:09:37.397 EAL: No shared files mode enabled, IPC will be disabled 00:09:37.397 EAL: Bus pci wants IOVA as 'DC' 00:09:37.397 EAL: Buses did not request a specific IOVA mode. 00:09:37.397 EAL: IOMMU is available, selecting IOVA as VA mode. 00:09:37.397 EAL: Selected IOVA mode 'VA' 00:09:37.398 EAL: Probing VFIO support... 00:09:37.398 EAL: IOMMU type 1 (Type 1) is supported 00:09:37.398 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:37.398 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:37.398 EAL: VFIO support initialized 00:09:37.398 EAL: Ask a virtual area of 0x2e000 bytes 00:09:37.398 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:37.398 EAL: Setting up physically contiguous memory... 
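Before the vtophys suite gets this far, EAL has confirmed that an IOMMU is present (IOVA as VA, VFIO initialized) and that both NUMA nodes expose 2 MB hugepages, matching the setup.sh status table earlier in the log. A quick way to spot-check those two preconditions by hand is to read the standard sysfs counters; this is only an illustrative check, not something the test scripts above are shown running:

  # Non-zero means VFIO/IOMMU groups are available to userspace drivers
  ls /sys/kernel/iommu_groups | wc -l
  # Free 2 MB hugepages per NUMA node, the pool the EAL memseg lists draw from
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages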
00:09:37.398 EAL: Setting maximum number of open files to 524288 00:09:37.398 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:37.398 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:09:37.398 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:37.398 EAL: Ask a virtual area of 0x61000 bytes 00:09:37.398 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:37.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:37.398 EAL: Ask a virtual area of 0x400000000 bytes 00:09:37.398 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:37.398 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:37.398 EAL: Ask a virtual area of 0x61000 bytes 00:09:37.398 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:37.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:37.398 EAL: Ask a virtual area of 0x400000000 bytes 00:09:37.398 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:37.398 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:37.398 EAL: Ask a virtual area of 0x61000 bytes 00:09:37.398 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:37.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:37.398 EAL: Ask a virtual area of 0x400000000 bytes 00:09:37.398 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:37.398 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:37.398 EAL: Ask a virtual area of 0x61000 bytes 00:09:37.398 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:37.398 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:37.398 EAL: Ask a virtual area of 0x400000000 bytes 00:09:37.398 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:37.398 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:37.398 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:09:37.398 EAL: Ask a virtual area of 0x61000 bytes 00:09:37.398 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:09:37.398 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:37.398 EAL: Ask a virtual area of 0x400000000 bytes 00:09:37.398 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:09:37.398 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:09:37.398 EAL: Ask a virtual area of 0x61000 bytes 00:09:37.398 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:09:37.398 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:37.398 EAL: Ask a virtual area of 0x400000000 bytes 00:09:37.398 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:09:37.398 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:09:37.398 EAL: Ask a virtual area of 0x61000 bytes 00:09:37.398 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:09:37.398 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:37.398 EAL: Ask a virtual area of 0x400000000 bytes 00:09:37.398 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:09:37.398 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:09:37.398 EAL: Ask a virtual area of 0x61000 bytes 00:09:37.398 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:09:37.398 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:37.398 EAL: Ask a virtual area of 0x400000000 bytes 00:09:37.398 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:09:37.398 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:09:37.398 EAL: Hugepages will be freed exactly as allocated. 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: TSC frequency is ~2300000 KHz 00:09:37.398 EAL: Main lcore 0 is ready (tid=7f30a4eafa00;cpuset=[0]) 00:09:37.398 EAL: Trying to obtain current memory policy. 00:09:37.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:37.398 EAL: Restoring previous memory policy: 0 00:09:37.398 EAL: request: mp_malloc_sync 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: Heap on socket 0 was expanded by 2MB 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:37.398 EAL: Mem event callback 'spdk:(nil)' registered 00:09:37.398 00:09:37.398 00:09:37.398 CUnit - A unit testing framework for C - Version 2.1-3 00:09:37.398 http://cunit.sourceforge.net/ 00:09:37.398 00:09:37.398 00:09:37.398 Suite: components_suite 00:09:37.398 Test: vtophys_malloc_test ...passed 00:09:37.398 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:37.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:37.398 EAL: Restoring previous memory policy: 4 00:09:37.398 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.398 EAL: request: mp_malloc_sync 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: Heap on socket 0 was expanded by 4MB 00:09:37.398 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.398 EAL: request: mp_malloc_sync 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: Heap on socket 0 was shrunk by 4MB 00:09:37.398 EAL: Trying to obtain current memory policy. 00:09:37.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:37.398 EAL: Restoring previous memory policy: 4 00:09:37.398 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.398 EAL: request: mp_malloc_sync 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: Heap on socket 0 was expanded by 6MB 00:09:37.398 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.398 EAL: request: mp_malloc_sync 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: Heap on socket 0 was shrunk by 6MB 00:09:37.398 EAL: Trying to obtain current memory policy. 00:09:37.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:37.398 EAL: Restoring previous memory policy: 4 00:09:37.398 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.398 EAL: request: mp_malloc_sync 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: Heap on socket 0 was expanded by 10MB 00:09:37.398 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.398 EAL: request: mp_malloc_sync 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: Heap on socket 0 was shrunk by 10MB 00:09:37.398 EAL: Trying to obtain current memory policy. 
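Each of the eight memseg lists reserved above advertises n_segs:8192 with hugepage_sz:2097152, which is exactly the 0x400000000-byte virtual reservation EAL reports per list. A one-line sanity check of that arithmetic (purely illustrative):

  # 8192 segments x 2 MiB pages = 17179869184 bytes = 0x400000000 (16 GiB) per memseg list
  printf '0x%x\n' $((8192 * 2097152))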
00:09:37.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:37.398 EAL: Restoring previous memory policy: 4 00:09:37.398 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.398 EAL: request: mp_malloc_sync 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: Heap on socket 0 was expanded by 18MB 00:09:37.398 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.398 EAL: request: mp_malloc_sync 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: Heap on socket 0 was shrunk by 18MB 00:09:37.398 EAL: Trying to obtain current memory policy. 00:09:37.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:37.398 EAL: Restoring previous memory policy: 4 00:09:37.398 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.398 EAL: request: mp_malloc_sync 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: Heap on socket 0 was expanded by 34MB 00:09:37.398 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.398 EAL: request: mp_malloc_sync 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: Heap on socket 0 was shrunk by 34MB 00:09:37.398 EAL: Trying to obtain current memory policy. 00:09:37.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:37.398 EAL: Restoring previous memory policy: 4 00:09:37.398 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.398 EAL: request: mp_malloc_sync 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: Heap on socket 0 was expanded by 66MB 00:09:37.398 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.398 EAL: request: mp_malloc_sync 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: Heap on socket 0 was shrunk by 66MB 00:09:37.398 EAL: Trying to obtain current memory policy. 00:09:37.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:37.398 EAL: Restoring previous memory policy: 4 00:09:37.398 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.398 EAL: request: mp_malloc_sync 00:09:37.398 EAL: No shared files mode enabled, IPC is disabled 00:09:37.398 EAL: Heap on socket 0 was expanded by 130MB 00:09:37.658 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.658 EAL: request: mp_malloc_sync 00:09:37.658 EAL: No shared files mode enabled, IPC is disabled 00:09:37.658 EAL: Heap on socket 0 was shrunk by 130MB 00:09:37.658 EAL: Trying to obtain current memory policy. 00:09:37.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:37.658 EAL: Restoring previous memory policy: 4 00:09:37.658 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.658 EAL: request: mp_malloc_sync 00:09:37.658 EAL: No shared files mode enabled, IPC is disabled 00:09:37.658 EAL: Heap on socket 0 was expanded by 258MB 00:09:37.658 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.658 EAL: request: mp_malloc_sync 00:09:37.658 EAL: No shared files mode enabled, IPC is disabled 00:09:37.658 EAL: Heap on socket 0 was shrunk by 258MB 00:09:37.658 EAL: Trying to obtain current memory policy. 
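The heap sizes the vtophys_spdk_malloc_test iterations walk through (4MB, 6MB, 10MB, 18MB, 34MB, 66MB, 130MB and 258MB above, then 514MB and 1026MB just below) follow a simple 2^k + 2 MB progression, which is easy to reproduce in the shell (illustrative only; the test binary itself chooses these sizes):

  for k in $(seq 1 10); do printf '%dMB ' $((2**k + 2)); done; echo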
00:09:37.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:37.658 EAL: Restoring previous memory policy: 4 00:09:37.658 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.658 EAL: request: mp_malloc_sync 00:09:37.658 EAL: No shared files mode enabled, IPC is disabled 00:09:37.658 EAL: Heap on socket 0 was expanded by 514MB 00:09:37.918 EAL: Calling mem event callback 'spdk:(nil)' 00:09:37.918 EAL: request: mp_malloc_sync 00:09:37.918 EAL: No shared files mode enabled, IPC is disabled 00:09:37.918 EAL: Heap on socket 0 was shrunk by 514MB 00:09:37.918 EAL: Trying to obtain current memory policy. 00:09:37.918 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:38.178 EAL: Restoring previous memory policy: 4 00:09:38.178 EAL: Calling mem event callback 'spdk:(nil)' 00:09:38.178 EAL: request: mp_malloc_sync 00:09:38.178 EAL: No shared files mode enabled, IPC is disabled 00:09:38.178 EAL: Heap on socket 0 was expanded by 1026MB 00:09:38.178 EAL: Calling mem event callback 'spdk:(nil)' 00:09:38.438 EAL: request: mp_malloc_sync 00:09:38.438 EAL: No shared files mode enabled, IPC is disabled 00:09:38.438 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:38.438 passed 00:09:38.438 00:09:38.438 Run Summary: Type Total Ran Passed Failed Inactive 00:09:38.438 suites 1 1 n/a 0 0 00:09:38.438 tests 2 2 2 0 0 00:09:38.438 asserts 497 497 497 0 n/a 00:09:38.438 00:09:38.438 Elapsed time = 0.965 seconds 00:09:38.438 EAL: Calling mem event callback 'spdk:(nil)' 00:09:38.438 EAL: request: mp_malloc_sync 00:09:38.438 EAL: No shared files mode enabled, IPC is disabled 00:09:38.438 EAL: Heap on socket 0 was shrunk by 2MB 00:09:38.438 EAL: No shared files mode enabled, IPC is disabled 00:09:38.438 EAL: No shared files mode enabled, IPC is disabled 00:09:38.438 EAL: No shared files mode enabled, IPC is disabled 00:09:38.438 00:09:38.438 real 0m1.094s 00:09:38.438 user 0m0.633s 00:09:38.438 sys 0m0.438s 00:09:38.438 23:51:13 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.438 23:51:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:38.438 ************************************ 00:09:38.438 END TEST env_vtophys 00:09:38.438 ************************************ 00:09:38.438 23:51:13 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/pci/pci_ut 00:09:38.438 23:51:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.438 23:51:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.438 23:51:13 env -- common/autotest_common.sh@10 -- # set +x 00:09:38.438 ************************************ 00:09:38.438 START TEST env_pci 00:09:38.438 ************************************ 00:09:38.438 23:51:13 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/pci/pci_ut 00:09:38.438 00:09:38.438 00:09:38.438 CUnit - A unit testing framework for C - Version 2.1-3 00:09:38.438 http://cunit.sourceforge.net/ 00:09:38.438 00:09:38.438 00:09:38.438 Suite: pci 00:09:38.438 Test: pci_hook ...[2024-12-09 23:51:13.325954] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 172639 has claimed it 00:09:38.438 EAL: Cannot find device (10000:00:01.0) 00:09:38.438 EAL: Failed to attach device on primary process 00:09:38.438 passed 00:09:38.438 00:09:38.438 Run Summary: Type Total Ran Passed Failed Inactive 
00:09:38.438 suites 1 1 n/a 0 0 00:09:38.438 tests 1 1 1 0 0 00:09:38.438 asserts 25 25 25 0 n/a 00:09:38.438 00:09:38.438 Elapsed time = 0.026 seconds 00:09:38.438 00:09:38.438 real 0m0.042s 00:09:38.438 user 0m0.012s 00:09:38.438 sys 0m0.030s 00:09:38.438 23:51:13 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.438 23:51:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:38.438 ************************************ 00:09:38.438 END TEST env_pci 00:09:38.439 ************************************ 00:09:38.698 23:51:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:38.698 23:51:13 env -- env/env.sh@15 -- # uname 00:09:38.698 23:51:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:38.698 23:51:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:38.698 23:51:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:38.699 23:51:13 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:38.699 23:51:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.699 23:51:13 env -- common/autotest_common.sh@10 -- # set +x 00:09:38.699 ************************************ 00:09:38.699 START TEST env_dpdk_post_init 00:09:38.699 ************************************ 00:09:38.699 23:51:13 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:38.699 EAL: Detected CPU lcores: 96 00:09:38.699 EAL: Detected NUMA nodes: 2 00:09:38.699 EAL: Detected shared linkage of DPDK 00:09:38.699 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:38.699 EAL: Selected IOVA mode 'VA' 00:09:38.699 EAL: VFIO support initialized 00:09:38.699 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:38.699 EAL: Using IOMMU type 1 (Type 1) 00:09:38.699 EAL: Ignore mapping IO port bar(1) 00:09:38.699 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:09:38.699 EAL: Ignore mapping IO port bar(1) 00:09:38.699 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:09:38.699 EAL: Ignore mapping IO port bar(1) 00:09:38.699 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:09:38.699 EAL: Ignore mapping IO port bar(1) 00:09:38.699 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:09:38.699 EAL: Ignore mapping IO port bar(1) 00:09:38.699 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:09:38.699 EAL: Ignore mapping IO port bar(1) 00:09:38.699 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:09:38.959 EAL: Ignore mapping IO port bar(1) 00:09:38.959 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:09:38.959 EAL: Ignore mapping IO port bar(1) 00:09:38.959 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:09:39.528 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:09:39.528 EAL: Ignore mapping IO port bar(1) 00:09:39.528 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:09:39.528 EAL: Ignore mapping IO port bar(1) 00:09:39.528 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:09:39.528 EAL: Ignore mapping IO port bar(1) 00:09:39.528 
EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:09:39.528 EAL: Ignore mapping IO port bar(1) 00:09:39.528 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:09:39.528 EAL: Ignore mapping IO port bar(1) 00:09:39.528 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:09:39.528 EAL: Ignore mapping IO port bar(1) 00:09:39.528 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:09:39.788 EAL: Ignore mapping IO port bar(1) 00:09:39.788 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:09:39.788 EAL: Ignore mapping IO port bar(1) 00:09:39.788 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:09:43.081 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:09:43.081 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:09:43.081 Starting DPDK initialization... 00:09:43.081 Starting SPDK post initialization... 00:09:43.081 SPDK NVMe probe 00:09:43.081 Attaching to 0000:5e:00.0 00:09:43.081 Attached to 0000:5e:00.0 00:09:43.081 Cleaning up... 00:09:43.081 00:09:43.081 real 0m4.338s 00:09:43.081 user 0m2.951s 00:09:43.081 sys 0m0.460s 00:09:43.081 23:51:17 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.081 23:51:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:43.081 ************************************ 00:09:43.081 END TEST env_dpdk_post_init 00:09:43.081 ************************************ 00:09:43.081 23:51:17 env -- env/env.sh@26 -- # uname 00:09:43.081 23:51:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:43.082 23:51:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/mem_callbacks/mem_callbacks 00:09:43.082 23:51:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.082 23:51:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.082 23:51:17 env -- common/autotest_common.sh@10 -- # set +x 00:09:43.082 ************************************ 00:09:43.082 START TEST env_mem_callbacks 00:09:43.082 ************************************ 00:09:43.082 23:51:17 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/mem_callbacks/mem_callbacks 00:09:43.082 EAL: Detected CPU lcores: 96 00:09:43.082 EAL: Detected NUMA nodes: 2 00:09:43.082 EAL: Detected shared linkage of DPDK 00:09:43.082 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:43.082 EAL: Selected IOVA mode 'VA' 00:09:43.082 EAL: VFIO support initialized 00:09:43.082 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:43.082 00:09:43.082 00:09:43.082 CUnit - A unit testing framework for C - Version 2.1-3 00:09:43.082 http://cunit.sourceforge.net/ 00:09:43.082 00:09:43.082 00:09:43.082 Suite: memory 00:09:43.082 Test: test ... 
00:09:43.082 register 0x200000200000 2097152 00:09:43.082 malloc 3145728 00:09:43.082 register 0x200000400000 4194304 00:09:43.082 buf 0x200000500000 len 3145728 PASSED 00:09:43.082 malloc 64 00:09:43.082 buf 0x2000004fff40 len 64 PASSED 00:09:43.082 malloc 4194304 00:09:43.082 register 0x200000800000 6291456 00:09:43.082 buf 0x200000a00000 len 4194304 PASSED 00:09:43.082 free 0x200000500000 3145728 00:09:43.082 free 0x2000004fff40 64 00:09:43.082 unregister 0x200000400000 4194304 PASSED 00:09:43.082 free 0x200000a00000 4194304 00:09:43.082 unregister 0x200000800000 6291456 PASSED 00:09:43.082 malloc 8388608 00:09:43.082 register 0x200000400000 10485760 00:09:43.082 buf 0x200000600000 len 8388608 PASSED 00:09:43.082 free 0x200000600000 8388608 00:09:43.082 unregister 0x200000400000 10485760 PASSED 00:09:43.082 passed 00:09:43.082 00:09:43.082 Run Summary: Type Total Ran Passed Failed Inactive 00:09:43.082 suites 1 1 n/a 0 0 00:09:43.082 tests 1 1 1 0 0 00:09:43.082 asserts 15 15 15 0 n/a 00:09:43.082 00:09:43.082 Elapsed time = 0.009 seconds 00:09:43.082 00:09:43.082 real 0m0.060s 00:09:43.082 user 0m0.024s 00:09:43.082 sys 0m0.036s 00:09:43.082 23:51:17 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.082 23:51:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:43.082 ************************************ 00:09:43.082 END TEST env_mem_callbacks 00:09:43.082 ************************************ 00:09:43.082 00:09:43.082 real 0m6.197s 00:09:43.082 user 0m3.982s 00:09:43.082 sys 0m1.298s 00:09:43.082 23:51:17 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.082 23:51:17 env -- common/autotest_common.sh@10 -- # set +x 00:09:43.082 ************************************ 00:09:43.082 END TEST env 00:09:43.082 ************************************ 00:09:43.082 23:51:17 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/rpc.sh 00:09:43.082 23:51:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.082 23:51:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.082 23:51:17 -- common/autotest_common.sh@10 -- # set +x 00:09:43.082 ************************************ 00:09:43.082 START TEST rpc 00:09:43.082 ************************************ 00:09:43.082 23:51:18 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/rpc.sh 00:09:43.342 * Looking for test storage... 
00:09:43.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:09:43.342 23:51:18 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:43.342 23:51:18 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:43.342 23:51:18 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:43.342 23:51:18 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:43.342 23:51:18 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.342 23:51:18 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.342 23:51:18 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.342 23:51:18 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.342 23:51:18 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.342 23:51:18 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.342 23:51:18 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.342 23:51:18 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.342 23:51:18 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.342 23:51:18 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.342 23:51:18 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.342 23:51:18 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:43.342 23:51:18 rpc -- scripts/common.sh@345 -- # : 1 00:09:43.342 23:51:18 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.342 23:51:18 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.342 23:51:18 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:43.343 23:51:18 rpc -- scripts/common.sh@353 -- # local d=1 00:09:43.343 23:51:18 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.343 23:51:18 rpc -- scripts/common.sh@355 -- # echo 1 00:09:43.343 23:51:18 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.343 23:51:18 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:43.343 23:51:18 rpc -- scripts/common.sh@353 -- # local d=2 00:09:43.343 23:51:18 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.343 23:51:18 rpc -- scripts/common.sh@355 -- # echo 2 00:09:43.343 23:51:18 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.343 23:51:18 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.343 23:51:18 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.343 23:51:18 rpc -- scripts/common.sh@368 -- # return 0 00:09:43.343 23:51:18 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.343 23:51:18 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:43.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.343 --rc genhtml_branch_coverage=1 00:09:43.343 --rc genhtml_function_coverage=1 00:09:43.343 --rc genhtml_legend=1 00:09:43.343 --rc geninfo_all_blocks=1 00:09:43.343 --rc geninfo_unexecuted_blocks=1 00:09:43.343 00:09:43.343 ' 00:09:43.343 23:51:18 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:43.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.343 --rc genhtml_branch_coverage=1 00:09:43.343 --rc genhtml_function_coverage=1 00:09:43.343 --rc genhtml_legend=1 00:09:43.343 --rc geninfo_all_blocks=1 00:09:43.343 --rc geninfo_unexecuted_blocks=1 00:09:43.343 00:09:43.343 ' 00:09:43.343 23:51:18 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:43.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.343 --rc genhtml_branch_coverage=1 00:09:43.343 --rc genhtml_function_coverage=1 
00:09:43.343 --rc genhtml_legend=1 00:09:43.343 --rc geninfo_all_blocks=1 00:09:43.343 --rc geninfo_unexecuted_blocks=1 00:09:43.343 00:09:43.343 ' 00:09:43.343 23:51:18 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:43.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.343 --rc genhtml_branch_coverage=1 00:09:43.343 --rc genhtml_function_coverage=1 00:09:43.343 --rc genhtml_legend=1 00:09:43.343 --rc geninfo_all_blocks=1 00:09:43.343 --rc geninfo_unexecuted_blocks=1 00:09:43.343 00:09:43.343 ' 00:09:43.343 23:51:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=173530 00:09:43.343 23:51:18 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -e bdev 00:09:43.343 23:51:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:43.343 23:51:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 173530 00:09:43.343 23:51:18 rpc -- common/autotest_common.sh@835 -- # '[' -z 173530 ']' 00:09:43.343 23:51:18 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.343 23:51:18 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.343 23:51:18 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.343 23:51:18 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.343 23:51:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.343 [2024-12-09 23:51:18.245239] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:09:43.343 [2024-12-09 23:51:18.245286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173530 ] 00:09:43.603 [2024-12-09 23:51:18.321585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.603 [2024-12-09 23:51:18.362269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:43.603 [2024-12-09 23:51:18.362303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 173530' to capture a snapshot of events at runtime. 00:09:43.603 [2024-12-09 23:51:18.362310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.603 [2024-12-09 23:51:18.362316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.603 [2024-12-09 23:51:18.362321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid173530 for offline analysis/debug. 
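The app_setup_trace NOTICE lines above are emitted because spdk_tgt was launched with '-e bdev', which enables the bdev tracepoint group for this run. As a minimal, untested sketch of how that trace could be inspected afterwards (the pid, shm path, and command are taken verbatim from the notices above; the binary location assumes a default in-tree build and may differ in other layouts):

  # Snapshot the bdev tracepoints of the running target started above (pid 173530).
  ./build/bin/spdk_trace -s spdk_tgt -p 173530
  # Or, as the notice suggests, keep the shared-memory trace file for offline analysis/debug.
  cp /dev/shm/spdk_tgt_trace.pid173530 /tmp/spdk_tgt_trace.pid173530

The rpc_trace_cmd_test case later in this log queries the same state over JSON-RPC via 'trace_get_info', which is why its output reports tpoint_shm_path and a bdev tpoint_mask of 0xffffffffffffffff.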
00:09:43.603 [2024-12-09 23:51:18.362868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.863 23:51:18 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.863 23:51:18 rpc -- common/autotest_common.sh@868 -- # return 0 00:09:43.863 23:51:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:09:43.863 23:51:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:09:43.863 23:51:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:43.863 23:51:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:43.863 23:51:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.863 23:51:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.863 23:51:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.863 ************************************ 00:09:43.863 START TEST rpc_integrity 00:09:43.863 ************************************ 00:09:43.863 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:43.863 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:43.863 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.863 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:43.863 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.863 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:43.863 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:43.863 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:43.863 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:43.863 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.863 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:43.863 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.863 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:43.863 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:43.863 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.863 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:43.863 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.863 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:43.863 { 00:09:43.863 "name": "Malloc0", 00:09:43.863 "aliases": [ 00:09:43.863 "2fbfcb6b-1cf2-4df8-8d3d-d0d87049181a" 00:09:43.863 ], 00:09:43.863 "product_name": "Malloc disk", 00:09:43.863 "block_size": 512, 00:09:43.863 "num_blocks": 16384, 00:09:43.863 "uuid": "2fbfcb6b-1cf2-4df8-8d3d-d0d87049181a", 00:09:43.863 "assigned_rate_limits": { 00:09:43.863 "rw_ios_per_sec": 0, 00:09:43.863 "rw_mbytes_per_sec": 0, 00:09:43.863 "r_mbytes_per_sec": 0, 00:09:43.863 "w_mbytes_per_sec": 
0 00:09:43.863 }, 00:09:43.863 "claimed": false, 00:09:43.863 "zoned": false, 00:09:43.863 "supported_io_types": { 00:09:43.863 "read": true, 00:09:43.863 "write": true, 00:09:43.863 "unmap": true, 00:09:43.863 "flush": true, 00:09:43.863 "reset": true, 00:09:43.863 "nvme_admin": false, 00:09:43.863 "nvme_io": false, 00:09:43.863 "nvme_io_md": false, 00:09:43.863 "write_zeroes": true, 00:09:43.863 "zcopy": true, 00:09:43.863 "get_zone_info": false, 00:09:43.863 "zone_management": false, 00:09:43.863 "zone_append": false, 00:09:43.863 "compare": false, 00:09:43.863 "compare_and_write": false, 00:09:43.863 "abort": true, 00:09:43.863 "seek_hole": false, 00:09:43.863 "seek_data": false, 00:09:43.863 "copy": true, 00:09:43.863 "nvme_iov_md": false 00:09:43.863 }, 00:09:43.863 "memory_domains": [ 00:09:43.863 { 00:09:43.863 "dma_device_id": "system", 00:09:43.863 "dma_device_type": 1 00:09:43.863 }, 00:09:43.863 { 00:09:43.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.863 "dma_device_type": 2 00:09:43.863 } 00:09:43.863 ], 00:09:43.863 "driver_specific": {} 00:09:43.863 } 00:09:43.863 ]' 00:09:43.864 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:43.864 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:43.864 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:43.864 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.864 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:43.864 [2024-12-09 23:51:18.736735] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:43.864 [2024-12-09 23:51:18.736763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.864 [2024-12-09 23:51:18.736776] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x243c100 00:09:43.864 [2024-12-09 23:51:18.736782] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.864 [2024-12-09 23:51:18.737867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.864 [2024-12-09 23:51:18.737886] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:43.864 Passthru0 00:09:43.864 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.864 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:43.864 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.864 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:43.864 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.864 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:43.864 { 00:09:43.864 "name": "Malloc0", 00:09:43.864 "aliases": [ 00:09:43.864 "2fbfcb6b-1cf2-4df8-8d3d-d0d87049181a" 00:09:43.864 ], 00:09:43.864 "product_name": "Malloc disk", 00:09:43.864 "block_size": 512, 00:09:43.864 "num_blocks": 16384, 00:09:43.864 "uuid": "2fbfcb6b-1cf2-4df8-8d3d-d0d87049181a", 00:09:43.864 "assigned_rate_limits": { 00:09:43.864 "rw_ios_per_sec": 0, 00:09:43.864 "rw_mbytes_per_sec": 0, 00:09:43.864 "r_mbytes_per_sec": 0, 00:09:43.864 "w_mbytes_per_sec": 0 00:09:43.864 }, 00:09:43.864 "claimed": true, 00:09:43.864 "claim_type": "exclusive_write", 00:09:43.864 "zoned": false, 00:09:43.864 "supported_io_types": { 00:09:43.864 "read": true, 00:09:43.864 "write": true, 00:09:43.864 "unmap": true, 
00:09:43.864 "flush": true, 00:09:43.864 "reset": true, 00:09:43.864 "nvme_admin": false, 00:09:43.864 "nvme_io": false, 00:09:43.864 "nvme_io_md": false, 00:09:43.864 "write_zeroes": true, 00:09:43.864 "zcopy": true, 00:09:43.864 "get_zone_info": false, 00:09:43.864 "zone_management": false, 00:09:43.864 "zone_append": false, 00:09:43.864 "compare": false, 00:09:43.864 "compare_and_write": false, 00:09:43.864 "abort": true, 00:09:43.864 "seek_hole": false, 00:09:43.864 "seek_data": false, 00:09:43.864 "copy": true, 00:09:43.864 "nvme_iov_md": false 00:09:43.864 }, 00:09:43.864 "memory_domains": [ 00:09:43.864 { 00:09:43.864 "dma_device_id": "system", 00:09:43.864 "dma_device_type": 1 00:09:43.864 }, 00:09:43.864 { 00:09:43.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.864 "dma_device_type": 2 00:09:43.864 } 00:09:43.864 ], 00:09:43.864 "driver_specific": {} 00:09:43.864 }, 00:09:43.864 { 00:09:43.864 "name": "Passthru0", 00:09:43.864 "aliases": [ 00:09:43.864 "1e850f47-b5eb-5dcc-9b51-bb5aedaf03fd" 00:09:43.864 ], 00:09:43.864 "product_name": "passthru", 00:09:43.864 "block_size": 512, 00:09:43.864 "num_blocks": 16384, 00:09:43.864 "uuid": "1e850f47-b5eb-5dcc-9b51-bb5aedaf03fd", 00:09:43.864 "assigned_rate_limits": { 00:09:43.864 "rw_ios_per_sec": 0, 00:09:43.864 "rw_mbytes_per_sec": 0, 00:09:43.864 "r_mbytes_per_sec": 0, 00:09:43.864 "w_mbytes_per_sec": 0 00:09:43.864 }, 00:09:43.864 "claimed": false, 00:09:43.864 "zoned": false, 00:09:43.864 "supported_io_types": { 00:09:43.864 "read": true, 00:09:43.864 "write": true, 00:09:43.864 "unmap": true, 00:09:43.864 "flush": true, 00:09:43.864 "reset": true, 00:09:43.864 "nvme_admin": false, 00:09:43.864 "nvme_io": false, 00:09:43.864 "nvme_io_md": false, 00:09:43.864 "write_zeroes": true, 00:09:43.864 "zcopy": true, 00:09:43.864 "get_zone_info": false, 00:09:43.864 "zone_management": false, 00:09:43.864 "zone_append": false, 00:09:43.864 "compare": false, 00:09:43.864 "compare_and_write": false, 00:09:43.864 "abort": true, 00:09:43.864 "seek_hole": false, 00:09:43.864 "seek_data": false, 00:09:43.864 "copy": true, 00:09:43.864 "nvme_iov_md": false 00:09:43.864 }, 00:09:43.864 "memory_domains": [ 00:09:43.864 { 00:09:43.864 "dma_device_id": "system", 00:09:43.864 "dma_device_type": 1 00:09:43.864 }, 00:09:43.864 { 00:09:43.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.864 "dma_device_type": 2 00:09:43.864 } 00:09:43.864 ], 00:09:43.864 "driver_specific": { 00:09:43.864 "passthru": { 00:09:43.864 "name": "Passthru0", 00:09:43.864 "base_bdev_name": "Malloc0" 00:09:43.864 } 00:09:43.864 } 00:09:43.864 } 00:09:43.864 ]' 00:09:43.864 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:44.124 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:44.124 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:44.124 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.124 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.124 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.124 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:44.124 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.124 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.124 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.124 23:51:18 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:44.124 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.124 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.124 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.124 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:44.124 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:44.124 23:51:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:44.124 00:09:44.124 real 0m0.271s 00:09:44.124 user 0m0.176s 00:09:44.124 sys 0m0.035s 00:09:44.124 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.124 23:51:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.124 ************************************ 00:09:44.124 END TEST rpc_integrity 00:09:44.124 ************************************ 00:09:44.124 23:51:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:44.124 23:51:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.124 23:51:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.124 23:51:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.124 ************************************ 00:09:44.124 START TEST rpc_plugins 00:09:44.124 ************************************ 00:09:44.124 23:51:18 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:44.124 23:51:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:44.124 23:51:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.124 23:51:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:44.124 23:51:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.124 23:51:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:44.124 23:51:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:44.124 23:51:18 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.124 23:51:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:44.124 23:51:18 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.124 23:51:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:44.124 { 00:09:44.124 "name": "Malloc1", 00:09:44.124 "aliases": [ 00:09:44.124 "c8605706-b9aa-4f44-86ec-65eca107d025" 00:09:44.124 ], 00:09:44.124 "product_name": "Malloc disk", 00:09:44.124 "block_size": 4096, 00:09:44.124 "num_blocks": 256, 00:09:44.124 "uuid": "c8605706-b9aa-4f44-86ec-65eca107d025", 00:09:44.124 "assigned_rate_limits": { 00:09:44.124 "rw_ios_per_sec": 0, 00:09:44.124 "rw_mbytes_per_sec": 0, 00:09:44.124 "r_mbytes_per_sec": 0, 00:09:44.124 "w_mbytes_per_sec": 0 00:09:44.124 }, 00:09:44.124 "claimed": false, 00:09:44.124 "zoned": false, 00:09:44.124 "supported_io_types": { 00:09:44.124 "read": true, 00:09:44.124 "write": true, 00:09:44.124 "unmap": true, 00:09:44.124 "flush": true, 00:09:44.124 "reset": true, 00:09:44.124 "nvme_admin": false, 00:09:44.124 "nvme_io": false, 00:09:44.124 "nvme_io_md": false, 00:09:44.124 "write_zeroes": true, 00:09:44.124 "zcopy": true, 00:09:44.124 "get_zone_info": false, 00:09:44.124 "zone_management": false, 00:09:44.124 "zone_append": false, 00:09:44.124 "compare": false, 00:09:44.124 "compare_and_write": false, 00:09:44.124 "abort": true, 00:09:44.124 "seek_hole": false, 00:09:44.124 "seek_data": false, 00:09:44.124 "copy": true, 00:09:44.124 
"nvme_iov_md": false 00:09:44.124 }, 00:09:44.124 "memory_domains": [ 00:09:44.124 { 00:09:44.124 "dma_device_id": "system", 00:09:44.124 "dma_device_type": 1 00:09:44.124 }, 00:09:44.124 { 00:09:44.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.124 "dma_device_type": 2 00:09:44.124 } 00:09:44.124 ], 00:09:44.124 "driver_specific": {} 00:09:44.124 } 00:09:44.124 ]' 00:09:44.124 23:51:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:44.124 23:51:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:44.124 23:51:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:44.124 23:51:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.124 23:51:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:44.124 23:51:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.124 23:51:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:44.124 23:51:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.124 23:51:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:44.124 23:51:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.124 23:51:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:44.124 23:51:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:44.384 23:51:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:44.384 00:09:44.384 real 0m0.139s 00:09:44.384 user 0m0.090s 00:09:44.384 sys 0m0.015s 00:09:44.384 23:51:19 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.384 23:51:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:44.384 ************************************ 00:09:44.384 END TEST rpc_plugins 00:09:44.384 ************************************ 00:09:44.384 23:51:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:44.384 23:51:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.384 23:51:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.384 23:51:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.384 ************************************ 00:09:44.384 START TEST rpc_trace_cmd_test 00:09:44.384 ************************************ 00:09:44.384 23:51:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:09:44.384 23:51:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:44.384 23:51:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:44.384 23:51:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.384 23:51:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.384 23:51:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.384 23:51:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:44.384 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid173530", 00:09:44.384 "tpoint_group_mask": "0x8", 00:09:44.384 "iscsi_conn": { 00:09:44.384 "mask": "0x2", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "scsi": { 00:09:44.384 "mask": "0x4", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "bdev": { 00:09:44.384 "mask": "0x8", 00:09:44.384 "tpoint_mask": "0xffffffffffffffff" 00:09:44.384 }, 00:09:44.384 "nvmf_rdma": { 00:09:44.384 "mask": "0x10", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "nvmf_tcp": { 00:09:44.384 "mask": "0x20", 
00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "ftl": { 00:09:44.384 "mask": "0x40", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "blobfs": { 00:09:44.384 "mask": "0x80", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "dsa": { 00:09:44.384 "mask": "0x200", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "thread": { 00:09:44.384 "mask": "0x400", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "nvme_pcie": { 00:09:44.384 "mask": "0x800", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "iaa": { 00:09:44.384 "mask": "0x1000", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "nvme_tcp": { 00:09:44.384 "mask": "0x2000", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "bdev_nvme": { 00:09:44.384 "mask": "0x4000", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "sock": { 00:09:44.384 "mask": "0x8000", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "blob": { 00:09:44.384 "mask": "0x10000", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "bdev_raid": { 00:09:44.384 "mask": "0x20000", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 }, 00:09:44.384 "scheduler": { 00:09:44.384 "mask": "0x40000", 00:09:44.384 "tpoint_mask": "0x0" 00:09:44.384 } 00:09:44.384 }' 00:09:44.384 23:51:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:44.384 23:51:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:44.384 23:51:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:44.384 23:51:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:44.384 23:51:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:44.384 23:51:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:44.384 23:51:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:44.644 23:51:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:44.644 23:51:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:44.644 23:51:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:44.644 00:09:44.644 real 0m0.221s 00:09:44.644 user 0m0.180s 00:09:44.644 sys 0m0.034s 00:09:44.644 23:51:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.644 23:51:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.644 ************************************ 00:09:44.644 END TEST rpc_trace_cmd_test 00:09:44.644 ************************************ 00:09:44.644 23:51:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:44.644 23:51:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:44.644 23:51:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:44.644 23:51:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.644 23:51:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.644 23:51:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.644 ************************************ 00:09:44.644 START TEST rpc_daemon_integrity 00:09:44.644 ************************************ 00:09:44.644 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:44.644 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:44.644 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.644 23:51:19 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.644 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.644 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:44.644 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:44.644 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:44.645 { 00:09:44.645 "name": "Malloc2", 00:09:44.645 "aliases": [ 00:09:44.645 "38c4678a-4819-4527-8ae1-f9b1d950bb97" 00:09:44.645 ], 00:09:44.645 "product_name": "Malloc disk", 00:09:44.645 "block_size": 512, 00:09:44.645 "num_blocks": 16384, 00:09:44.645 "uuid": "38c4678a-4819-4527-8ae1-f9b1d950bb97", 00:09:44.645 "assigned_rate_limits": { 00:09:44.645 "rw_ios_per_sec": 0, 00:09:44.645 "rw_mbytes_per_sec": 0, 00:09:44.645 "r_mbytes_per_sec": 0, 00:09:44.645 "w_mbytes_per_sec": 0 00:09:44.645 }, 00:09:44.645 "claimed": false, 00:09:44.645 "zoned": false, 00:09:44.645 "supported_io_types": { 00:09:44.645 "read": true, 00:09:44.645 "write": true, 00:09:44.645 "unmap": true, 00:09:44.645 "flush": true, 00:09:44.645 "reset": true, 00:09:44.645 "nvme_admin": false, 00:09:44.645 "nvme_io": false, 00:09:44.645 "nvme_io_md": false, 00:09:44.645 "write_zeroes": true, 00:09:44.645 "zcopy": true, 00:09:44.645 "get_zone_info": false, 00:09:44.645 "zone_management": false, 00:09:44.645 "zone_append": false, 00:09:44.645 "compare": false, 00:09:44.645 "compare_and_write": false, 00:09:44.645 "abort": true, 00:09:44.645 "seek_hole": false, 00:09:44.645 "seek_data": false, 00:09:44.645 "copy": true, 00:09:44.645 "nvme_iov_md": false 00:09:44.645 }, 00:09:44.645 "memory_domains": [ 00:09:44.645 { 00:09:44.645 "dma_device_id": "system", 00:09:44.645 "dma_device_type": 1 00:09:44.645 }, 00:09:44.645 { 00:09:44.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.645 "dma_device_type": 2 00:09:44.645 } 00:09:44.645 ], 00:09:44.645 "driver_specific": {} 00:09:44.645 } 00:09:44.645 ]' 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.645 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 [2024-12-09 23:51:19.583020] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:44.905 
[2024-12-09 23:51:19.583046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.905 [2024-12-09 23:51:19.583058] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22fa450 00:09:44.905 [2024-12-09 23:51:19.583064] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.905 [2024-12-09 23:51:19.584046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.905 [2024-12-09 23:51:19.584065] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:44.905 Passthru0 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:44.905 { 00:09:44.905 "name": "Malloc2", 00:09:44.905 "aliases": [ 00:09:44.905 "38c4678a-4819-4527-8ae1-f9b1d950bb97" 00:09:44.905 ], 00:09:44.905 "product_name": "Malloc disk", 00:09:44.905 "block_size": 512, 00:09:44.905 "num_blocks": 16384, 00:09:44.905 "uuid": "38c4678a-4819-4527-8ae1-f9b1d950bb97", 00:09:44.905 "assigned_rate_limits": { 00:09:44.905 "rw_ios_per_sec": 0, 00:09:44.905 "rw_mbytes_per_sec": 0, 00:09:44.905 "r_mbytes_per_sec": 0, 00:09:44.905 "w_mbytes_per_sec": 0 00:09:44.905 }, 00:09:44.905 "claimed": true, 00:09:44.905 "claim_type": "exclusive_write", 00:09:44.905 "zoned": false, 00:09:44.905 "supported_io_types": { 00:09:44.905 "read": true, 00:09:44.905 "write": true, 00:09:44.905 "unmap": true, 00:09:44.905 "flush": true, 00:09:44.905 "reset": true, 00:09:44.905 "nvme_admin": false, 00:09:44.905 "nvme_io": false, 00:09:44.905 "nvme_io_md": false, 00:09:44.905 "write_zeroes": true, 00:09:44.905 "zcopy": true, 00:09:44.905 "get_zone_info": false, 00:09:44.905 "zone_management": false, 00:09:44.905 "zone_append": false, 00:09:44.905 "compare": false, 00:09:44.905 "compare_and_write": false, 00:09:44.905 "abort": true, 00:09:44.905 "seek_hole": false, 00:09:44.905 "seek_data": false, 00:09:44.905 "copy": true, 00:09:44.905 "nvme_iov_md": false 00:09:44.905 }, 00:09:44.905 "memory_domains": [ 00:09:44.905 { 00:09:44.905 "dma_device_id": "system", 00:09:44.905 "dma_device_type": 1 00:09:44.905 }, 00:09:44.905 { 00:09:44.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.905 "dma_device_type": 2 00:09:44.905 } 00:09:44.905 ], 00:09:44.905 "driver_specific": {} 00:09:44.905 }, 00:09:44.905 { 00:09:44.905 "name": "Passthru0", 00:09:44.905 "aliases": [ 00:09:44.905 "927db720-b6fa-50c6-af10-058ab554cc5a" 00:09:44.905 ], 00:09:44.905 "product_name": "passthru", 00:09:44.905 "block_size": 512, 00:09:44.905 "num_blocks": 16384, 00:09:44.905 "uuid": "927db720-b6fa-50c6-af10-058ab554cc5a", 00:09:44.905 "assigned_rate_limits": { 00:09:44.905 "rw_ios_per_sec": 0, 00:09:44.905 "rw_mbytes_per_sec": 0, 00:09:44.905 "r_mbytes_per_sec": 0, 00:09:44.905 "w_mbytes_per_sec": 0 00:09:44.905 }, 00:09:44.905 "claimed": false, 00:09:44.905 "zoned": false, 00:09:44.905 "supported_io_types": { 00:09:44.905 "read": true, 00:09:44.905 "write": true, 00:09:44.905 "unmap": true, 00:09:44.905 "flush": true, 00:09:44.905 "reset": true, 
00:09:44.905 "nvme_admin": false, 00:09:44.905 "nvme_io": false, 00:09:44.905 "nvme_io_md": false, 00:09:44.905 "write_zeroes": true, 00:09:44.905 "zcopy": true, 00:09:44.905 "get_zone_info": false, 00:09:44.905 "zone_management": false, 00:09:44.905 "zone_append": false, 00:09:44.905 "compare": false, 00:09:44.905 "compare_and_write": false, 00:09:44.905 "abort": true, 00:09:44.905 "seek_hole": false, 00:09:44.905 "seek_data": false, 00:09:44.905 "copy": true, 00:09:44.905 "nvme_iov_md": false 00:09:44.905 }, 00:09:44.905 "memory_domains": [ 00:09:44.905 { 00:09:44.905 "dma_device_id": "system", 00:09:44.905 "dma_device_type": 1 00:09:44.905 }, 00:09:44.905 { 00:09:44.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.905 "dma_device_type": 2 00:09:44.905 } 00:09:44.905 ], 00:09:44.905 "driver_specific": { 00:09:44.905 "passthru": { 00:09:44.905 "name": "Passthru0", 00:09:44.905 "base_bdev_name": "Malloc2" 00:09:44.905 } 00:09:44.905 } 00:09:44.905 } 00:09:44.905 ]' 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:44.905 00:09:44.905 real 0m0.280s 00:09:44.905 user 0m0.178s 00:09:44.905 sys 0m0.038s 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.905 23:51:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:44.905 ************************************ 00:09:44.905 END TEST rpc_daemon_integrity 00:09:44.905 ************************************ 00:09:44.905 23:51:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:44.906 23:51:19 rpc -- rpc/rpc.sh@84 -- # killprocess 173530 00:09:44.906 23:51:19 rpc -- common/autotest_common.sh@954 -- # '[' -z 173530 ']' 00:09:44.906 23:51:19 rpc -- common/autotest_common.sh@958 -- # kill -0 173530 00:09:44.906 23:51:19 rpc -- common/autotest_common.sh@959 -- # uname 00:09:44.906 23:51:19 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.906 23:51:19 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 173530 
00:09:44.906 23:51:19 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.906 23:51:19 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.906 23:51:19 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 173530' 00:09:44.906 killing process with pid 173530 00:09:44.906 23:51:19 rpc -- common/autotest_common.sh@973 -- # kill 173530 00:09:44.906 23:51:19 rpc -- common/autotest_common.sh@978 -- # wait 173530 00:09:45.475 00:09:45.475 real 0m2.100s 00:09:45.475 user 0m2.677s 00:09:45.475 sys 0m0.702s 00:09:45.475 23:51:20 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.475 23:51:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.475 ************************************ 00:09:45.475 END TEST rpc 00:09:45.475 ************************************ 00:09:45.475 23:51:20 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/skip_rpc.sh 00:09:45.475 23:51:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.475 23:51:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.475 23:51:20 -- common/autotest_common.sh@10 -- # set +x 00:09:45.475 ************************************ 00:09:45.475 START TEST skip_rpc 00:09:45.475 ************************************ 00:09:45.475 23:51:20 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/skip_rpc.sh 00:09:45.475 * Looking for test storage... 00:09:45.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:09:45.475 23:51:20 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.475 23:51:20 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.475 23:51:20 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.475 23:51:20 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.475 23:51:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:45.475 23:51:20 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.475 23:51:20 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.475 --rc genhtml_branch_coverage=1 00:09:45.475 --rc genhtml_function_coverage=1 00:09:45.475 --rc genhtml_legend=1 00:09:45.475 --rc geninfo_all_blocks=1 00:09:45.475 --rc geninfo_unexecuted_blocks=1 00:09:45.475 00:09:45.475 ' 00:09:45.475 23:51:20 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.475 --rc genhtml_branch_coverage=1 00:09:45.475 --rc genhtml_function_coverage=1 00:09:45.475 --rc genhtml_legend=1 00:09:45.475 --rc geninfo_all_blocks=1 00:09:45.475 --rc geninfo_unexecuted_blocks=1 00:09:45.475 00:09:45.475 ' 00:09:45.475 23:51:20 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.475 --rc genhtml_branch_coverage=1 00:09:45.475 --rc genhtml_function_coverage=1 00:09:45.475 --rc genhtml_legend=1 00:09:45.475 --rc geninfo_all_blocks=1 00:09:45.475 --rc geninfo_unexecuted_blocks=1 00:09:45.475 00:09:45.475 ' 00:09:45.475 23:51:20 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.475 --rc genhtml_branch_coverage=1 00:09:45.475 --rc genhtml_function_coverage=1 00:09:45.475 --rc genhtml_legend=1 00:09:45.475 --rc geninfo_all_blocks=1 00:09:45.475 --rc geninfo_unexecuted_blocks=1 00:09:45.475 00:09:45.475 ' 00:09:45.475 23:51:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:09:45.475 23:51:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/log.txt 00:09:45.475 23:51:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:45.475 23:51:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.475 23:51:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.475 23:51:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.475 ************************************ 00:09:45.475 START TEST skip_rpc 00:09:45.475 ************************************ 00:09:45.475 23:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 
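The skip_rpc case that follows starts spdk_tgt with '--no-rpc-server', so its 'NOT rpc_cmd spdk_get_version' assertion can only pass if the JSON-RPC listener is genuinely absent. A rough manual equivalent, sketched under the assumption that the workspace layout matches the paths in this log and that rpc.py talks to the default /var/tmp/spdk.sock socket:

  # Start the target without its JSON-RPC server, same flags as the test below.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  # Any RPC must now fail; the test uses the version query for this check.
  ./scripts/rpc.py spdk_get_version \
    && echo "unexpected: RPC server is reachable" \
    || echo "expected failure: no RPC server"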
00:09:45.475 23:51:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=174102 00:09:45.475 23:51:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:45.475 23:51:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:45.475 23:51:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:45.736 [2024-12-09 23:51:20.441786] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:09:45.736 [2024-12-09 23:51:20.441827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174102 ] 00:09:45.736 [2024-12-09 23:51:20.518061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.736 [2024-12-09 23:51:20.557726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.014 23:51:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 174102 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 174102 ']' 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 174102 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 174102 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 174102' 00:09:51.015 killing process with pid 174102 00:09:51.015 
23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 174102 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 174102 00:09:51.015 00:09:51.015 real 0m5.366s 00:09:51.015 user 0m5.119s 00:09:51.015 sys 0m0.284s 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.015 23:51:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.015 ************************************ 00:09:51.015 END TEST skip_rpc 00:09:51.015 ************************************ 00:09:51.015 23:51:25 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:51.015 23:51:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.015 23:51:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.015 23:51:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.015 ************************************ 00:09:51.015 START TEST skip_rpc_with_json 00:09:51.015 ************************************ 00:09:51.015 23:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:51.015 23:51:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:51.015 23:51:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=175050 00:09:51.015 23:51:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:51.015 23:51:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:09:51.015 23:51:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 175050 00:09:51.015 23:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 175050 ']' 00:09:51.015 23:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.015 23:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.015 23:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.015 23:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.015 23:51:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:51.015 [2024-12-09 23:51:25.880088] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:09:51.015 [2024-12-09 23:51:25.880128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175050 ] 00:09:51.280 [2024-12-09 23:51:25.957116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.280 [2024-12-09 23:51:25.998091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.280 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.280 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:51.280 23:51:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:51.280 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.280 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:51.280 [2024-12-09 23:51:26.209892] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:51.540 request: 00:09:51.540 { 00:09:51.540 "trtype": "tcp", 00:09:51.540 "method": "nvmf_get_transports", 00:09:51.540 "req_id": 1 00:09:51.540 } 00:09:51.540 Got JSON-RPC error response 00:09:51.540 response: 00:09:51.540 { 00:09:51.540 "code": -19, 00:09:51.540 "message": "No such device" 00:09:51.540 } 00:09:51.540 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:51.540 23:51:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:51.540 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.540 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:51.540 [2024-12-09 23:51:26.222002] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.540 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.540 23:51:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:51.540 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.540 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:51.540 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.540 23:51:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:09:51.540 { 00:09:51.540 "subsystems": [ 00:09:51.540 { 00:09:51.540 "subsystem": "fsdev", 00:09:51.540 "config": [ 00:09:51.540 { 00:09:51.540 "method": "fsdev_set_opts", 00:09:51.540 "params": { 00:09:51.540 "fsdev_io_pool_size": 65535, 00:09:51.540 "fsdev_io_cache_size": 256 00:09:51.540 } 00:09:51.540 } 00:09:51.540 ] 00:09:51.540 }, 00:09:51.540 { 00:09:51.540 "subsystem": "vfio_user_target", 00:09:51.540 "config": null 00:09:51.540 }, 00:09:51.540 { 00:09:51.540 "subsystem": "keyring", 00:09:51.540 "config": [] 00:09:51.540 }, 00:09:51.540 { 00:09:51.540 "subsystem": "iobuf", 00:09:51.540 "config": [ 00:09:51.540 { 00:09:51.540 "method": "iobuf_set_options", 00:09:51.540 "params": { 00:09:51.540 "small_pool_count": 8192, 00:09:51.540 "large_pool_count": 1024, 00:09:51.540 "small_bufsize": 8192, 00:09:51.540 "large_bufsize": 135168, 00:09:51.540 "enable_numa": false 00:09:51.540 } 00:09:51.540 } 
00:09:51.540 ] 00:09:51.540 }, 00:09:51.540 { 00:09:51.540 "subsystem": "sock", 00:09:51.540 "config": [ 00:09:51.540 { 00:09:51.540 "method": "sock_set_default_impl", 00:09:51.540 "params": { 00:09:51.540 "impl_name": "posix" 00:09:51.540 } 00:09:51.540 }, 00:09:51.540 { 00:09:51.540 "method": "sock_impl_set_options", 00:09:51.540 "params": { 00:09:51.540 "impl_name": "ssl", 00:09:51.540 "recv_buf_size": 4096, 00:09:51.540 "send_buf_size": 4096, 00:09:51.540 "enable_recv_pipe": true, 00:09:51.540 "enable_quickack": false, 00:09:51.540 "enable_placement_id": 0, 00:09:51.540 "enable_zerocopy_send_server": true, 00:09:51.540 "enable_zerocopy_send_client": false, 00:09:51.540 "zerocopy_threshold": 0, 00:09:51.540 "tls_version": 0, 00:09:51.540 "enable_ktls": false 00:09:51.540 } 00:09:51.540 }, 00:09:51.540 { 00:09:51.540 "method": "sock_impl_set_options", 00:09:51.540 "params": { 00:09:51.540 "impl_name": "posix", 00:09:51.540 "recv_buf_size": 2097152, 00:09:51.540 "send_buf_size": 2097152, 00:09:51.540 "enable_recv_pipe": true, 00:09:51.540 "enable_quickack": false, 00:09:51.540 "enable_placement_id": 0, 00:09:51.540 "enable_zerocopy_send_server": true, 00:09:51.540 "enable_zerocopy_send_client": false, 00:09:51.540 "zerocopy_threshold": 0, 00:09:51.540 "tls_version": 0, 00:09:51.540 "enable_ktls": false 00:09:51.540 } 00:09:51.540 } 00:09:51.540 ] 00:09:51.540 }, 00:09:51.540 { 00:09:51.540 "subsystem": "vmd", 00:09:51.540 "config": [] 00:09:51.540 }, 00:09:51.540 { 00:09:51.540 "subsystem": "accel", 00:09:51.540 "config": [ 00:09:51.540 { 00:09:51.540 "method": "accel_set_options", 00:09:51.540 "params": { 00:09:51.540 "small_cache_size": 128, 00:09:51.540 "large_cache_size": 16, 00:09:51.540 "task_count": 2048, 00:09:51.540 "sequence_count": 2048, 00:09:51.540 "buf_count": 2048 00:09:51.540 } 00:09:51.540 } 00:09:51.540 ] 00:09:51.540 }, 00:09:51.540 { 00:09:51.540 "subsystem": "bdev", 00:09:51.540 "config": [ 00:09:51.540 { 00:09:51.540 "method": "bdev_set_options", 00:09:51.540 "params": { 00:09:51.540 "bdev_io_pool_size": 65535, 00:09:51.540 "bdev_io_cache_size": 256, 00:09:51.540 "bdev_auto_examine": true, 00:09:51.540 "iobuf_small_cache_size": 128, 00:09:51.540 "iobuf_large_cache_size": 16 00:09:51.540 } 00:09:51.540 }, 00:09:51.540 { 00:09:51.540 "method": "bdev_raid_set_options", 00:09:51.540 "params": { 00:09:51.540 "process_window_size_kb": 1024, 00:09:51.540 "process_max_bandwidth_mb_sec": 0 00:09:51.540 } 00:09:51.540 }, 00:09:51.540 { 00:09:51.540 "method": "bdev_iscsi_set_options", 00:09:51.540 "params": { 00:09:51.540 "timeout_sec": 30 00:09:51.540 } 00:09:51.540 }, 00:09:51.540 { 00:09:51.540 "method": "bdev_nvme_set_options", 00:09:51.540 "params": { 00:09:51.540 "action_on_timeout": "none", 00:09:51.540 "timeout_us": 0, 00:09:51.540 "timeout_admin_us": 0, 00:09:51.540 "keep_alive_timeout_ms": 10000, 00:09:51.540 "arbitration_burst": 0, 00:09:51.541 "low_priority_weight": 0, 00:09:51.541 "medium_priority_weight": 0, 00:09:51.541 "high_priority_weight": 0, 00:09:51.541 "nvme_adminq_poll_period_us": 10000, 00:09:51.541 "nvme_ioq_poll_period_us": 0, 00:09:51.541 "io_queue_requests": 0, 00:09:51.541 "delay_cmd_submit": true, 00:09:51.541 "transport_retry_count": 4, 00:09:51.541 "bdev_retry_count": 3, 00:09:51.541 "transport_ack_timeout": 0, 00:09:51.541 "ctrlr_loss_timeout_sec": 0, 00:09:51.541 "reconnect_delay_sec": 0, 00:09:51.541 "fast_io_fail_timeout_sec": 0, 00:09:51.541 "disable_auto_failback": false, 00:09:51.541 "generate_uuids": false, 00:09:51.541 "transport_tos": 
0, 00:09:51.541 "nvme_error_stat": false, 00:09:51.541 "rdma_srq_size": 0, 00:09:51.541 "io_path_stat": false, 00:09:51.541 "allow_accel_sequence": false, 00:09:51.541 "rdma_max_cq_size": 0, 00:09:51.541 "rdma_cm_event_timeout_ms": 0, 00:09:51.541 "dhchap_digests": [ 00:09:51.541 "sha256", 00:09:51.541 "sha384", 00:09:51.541 "sha512" 00:09:51.541 ], 00:09:51.541 "dhchap_dhgroups": [ 00:09:51.541 "null", 00:09:51.541 "ffdhe2048", 00:09:51.541 "ffdhe3072", 00:09:51.541 "ffdhe4096", 00:09:51.541 "ffdhe6144", 00:09:51.541 "ffdhe8192" 00:09:51.541 ], 00:09:51.541 "rdma_umr_per_io": false 00:09:51.541 } 00:09:51.541 }, 00:09:51.541 { 00:09:51.541 "method": "bdev_nvme_set_hotplug", 00:09:51.541 "params": { 00:09:51.541 "period_us": 100000, 00:09:51.541 "enable": false 00:09:51.541 } 00:09:51.541 }, 00:09:51.541 { 00:09:51.541 "method": "bdev_wait_for_examine" 00:09:51.541 } 00:09:51.541 ] 00:09:51.541 }, 00:09:51.541 { 00:09:51.541 "subsystem": "scsi", 00:09:51.541 "config": null 00:09:51.541 }, 00:09:51.541 { 00:09:51.541 "subsystem": "scheduler", 00:09:51.541 "config": [ 00:09:51.541 { 00:09:51.541 "method": "framework_set_scheduler", 00:09:51.541 "params": { 00:09:51.541 "name": "static" 00:09:51.541 } 00:09:51.541 } 00:09:51.541 ] 00:09:51.541 }, 00:09:51.541 { 00:09:51.541 "subsystem": "vhost_scsi", 00:09:51.541 "config": [] 00:09:51.541 }, 00:09:51.541 { 00:09:51.541 "subsystem": "vhost_blk", 00:09:51.541 "config": [] 00:09:51.541 }, 00:09:51.541 { 00:09:51.541 "subsystem": "ublk", 00:09:51.541 "config": [] 00:09:51.541 }, 00:09:51.541 { 00:09:51.541 "subsystem": "nbd", 00:09:51.541 "config": [] 00:09:51.541 }, 00:09:51.541 { 00:09:51.541 "subsystem": "nvmf", 00:09:51.541 "config": [ 00:09:51.541 { 00:09:51.541 "method": "nvmf_set_config", 00:09:51.541 "params": { 00:09:51.541 "discovery_filter": "match_any", 00:09:51.541 "admin_cmd_passthru": { 00:09:51.541 "identify_ctrlr": false 00:09:51.541 }, 00:09:51.541 "dhchap_digests": [ 00:09:51.541 "sha256", 00:09:51.541 "sha384", 00:09:51.541 "sha512" 00:09:51.541 ], 00:09:51.541 "dhchap_dhgroups": [ 00:09:51.541 "null", 00:09:51.541 "ffdhe2048", 00:09:51.541 "ffdhe3072", 00:09:51.541 "ffdhe4096", 00:09:51.541 "ffdhe6144", 00:09:51.541 "ffdhe8192" 00:09:51.541 ] 00:09:51.541 } 00:09:51.541 }, 00:09:51.541 { 00:09:51.541 "method": "nvmf_set_max_subsystems", 00:09:51.541 "params": { 00:09:51.541 "max_subsystems": 1024 00:09:51.541 } 00:09:51.541 }, 00:09:51.541 { 00:09:51.541 "method": "nvmf_set_crdt", 00:09:51.541 "params": { 00:09:51.541 "crdt1": 0, 00:09:51.541 "crdt2": 0, 00:09:51.541 "crdt3": 0 00:09:51.541 } 00:09:51.541 }, 00:09:51.541 { 00:09:51.541 "method": "nvmf_create_transport", 00:09:51.541 "params": { 00:09:51.541 "trtype": "TCP", 00:09:51.541 "max_queue_depth": 128, 00:09:51.541 "max_io_qpairs_per_ctrlr": 127, 00:09:51.541 "in_capsule_data_size": 4096, 00:09:51.541 "max_io_size": 131072, 00:09:51.541 "io_unit_size": 131072, 00:09:51.541 "max_aq_depth": 128, 00:09:51.541 "num_shared_buffers": 511, 00:09:51.541 "buf_cache_size": 4294967295, 00:09:51.541 "dif_insert_or_strip": false, 00:09:51.541 "zcopy": false, 00:09:51.541 "c2h_success": true, 00:09:51.541 "sock_priority": 0, 00:09:51.541 "abort_timeout_sec": 1, 00:09:51.541 "ack_timeout": 0, 00:09:51.541 "data_wr_pool_size": 0 00:09:51.541 } 00:09:51.541 } 00:09:51.541 ] 00:09:51.541 }, 00:09:51.541 { 00:09:51.541 "subsystem": "iscsi", 00:09:51.541 "config": [ 00:09:51.541 { 00:09:51.541 "method": "iscsi_set_options", 00:09:51.541 "params": { 00:09:51.541 "node_base": 
"iqn.2016-06.io.spdk", 00:09:51.541 "max_sessions": 128, 00:09:51.541 "max_connections_per_session": 2, 00:09:51.541 "max_queue_depth": 64, 00:09:51.541 "default_time2wait": 2, 00:09:51.541 "default_time2retain": 20, 00:09:51.541 "first_burst_length": 8192, 00:09:51.541 "immediate_data": true, 00:09:51.541 "allow_duplicated_isid": false, 00:09:51.541 "error_recovery_level": 0, 00:09:51.541 "nop_timeout": 60, 00:09:51.541 "nop_in_interval": 30, 00:09:51.541 "disable_chap": false, 00:09:51.541 "require_chap": false, 00:09:51.541 "mutual_chap": false, 00:09:51.541 "chap_group": 0, 00:09:51.541 "max_large_datain_per_connection": 64, 00:09:51.541 "max_r2t_per_connection": 4, 00:09:51.541 "pdu_pool_size": 36864, 00:09:51.541 "immediate_data_pool_size": 16384, 00:09:51.541 "data_out_pool_size": 2048 00:09:51.541 } 00:09:51.541 } 00:09:51.541 ] 00:09:51.541 } 00:09:51.541 ] 00:09:51.541 } 00:09:51.541 23:51:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:51.541 23:51:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 175050 00:09:51.541 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 175050 ']' 00:09:51.541 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 175050 00:09:51.541 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:51.541 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.541 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 175050 00:09:51.541 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.541 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.541 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 175050' 00:09:51.541 killing process with pid 175050 00:09:51.541 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 175050 00:09:51.541 23:51:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 175050 00:09:52.110 23:51:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=175282 00:09:52.110 23:51:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:09:52.110 23:51:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:57.388 23:51:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 175282 00:09:57.388 23:51:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 175282 ']' 00:09:57.388 23:51:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 175282 00:09:57.388 23:51:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:57.388 23:51:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.388 23:51:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 175282 00:09:57.388 23:51:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:57.388 23:51:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:57.388 23:51:31 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 175282' 00:09:57.388 killing process with pid 175282 00:09:57.388 23:51:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 175282 00:09:57.388 23:51:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 175282 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/log.txt 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/log.txt 00:09:57.388 00:09:57.388 real 0m6.284s 00:09:57.388 user 0m5.976s 00:09:57.388 sys 0m0.606s 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:57.388 ************************************ 00:09:57.388 END TEST skip_rpc_with_json 00:09:57.388 ************************************ 00:09:57.388 23:51:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:57.388 23:51:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.388 23:51:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.388 23:51:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.388 ************************************ 00:09:57.388 START TEST skip_rpc_with_delay 00:09:57.388 ************************************ 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt ]] 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:57.388 [2024-12-09 23:51:32.239672] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:57.388 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:57.388 00:09:57.388 real 0m0.070s 00:09:57.388 user 0m0.046s 00:09:57.389 sys 0m0.024s 00:09:57.389 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.389 23:51:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:57.389 ************************************ 00:09:57.389 END TEST skip_rpc_with_delay 00:09:57.389 ************************************ 00:09:57.389 23:51:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:57.389 23:51:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:57.389 23:51:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:57.389 23:51:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.389 23:51:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.389 23:51:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.649 ************************************ 00:09:57.649 START TEST exit_on_failed_rpc_init 00:09:57.649 ************************************ 00:09:57.649 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:57.649 23:51:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=176262 00:09:57.649 23:51:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 176262 00:09:57.649 23:51:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:09:57.649 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 176262 ']' 00:09:57.649 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.649 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.649 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.649 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.649 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:57.649 [2024-12-09 23:51:32.383202] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
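The skip_rpc_with_delay step above confirms that spdk_tgt refuses --wait-for-rpc when the RPC server is disabled ("Cannot use '--wait-for-rpc' if no RPC server is going to be started."). A negative check of that kind can simply assert the non-zero exit status; this is an illustrative sketch, not the suite's NOT helper, and the binary path is a placeholder for a built tree:

  # Expect spdk_tgt to exit with an error when both flags are combined.
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "expected spdk_tgt to reject --wait-for-rpc without an RPC server" >&2
      exit 1
  fi
  echo "flag conflict rejected as expected"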
00:09:57.649 [2024-12-09 23:51:32.383245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176262 ] 00:09:57.649 [2024-12-09 23:51:32.456960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.649 [2024-12-09 23:51:32.497974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x2 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x2 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt ]] 00:09:57.908 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x2 00:09:57.908 [2024-12-09 23:51:32.762069] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:09:57.908 [2024-12-09 23:51:32.762113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176268 ] 00:09:57.908 [2024-12-09 23:51:32.838725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.168 [2024-12-09 23:51:32.878857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.168 [2024-12-09 23:51:32.878907] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:58.168 [2024-12-09 23:51:32.878917] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:58.168 [2024-12-09 23:51:32.878923] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 176262 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 176262 ']' 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 176262 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 176262 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 176262' 00:09:58.168 killing process with pid 176262 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 176262 00:09:58.168 23:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 176262 00:09:58.428 00:09:58.428 real 0m0.952s 00:09:58.428 user 0m1.018s 00:09:58.428 sys 0m0.385s 00:09:58.428 23:51:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.428 23:51:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:58.428 ************************************ 00:09:58.428 END TEST exit_on_failed_rpc_init 00:09:58.428 ************************************ 00:09:58.428 23:51:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:09:58.428 00:09:58.428 real 0m13.139s 00:09:58.428 user 0m12.387s 00:09:58.428 sys 0m1.569s 00:09:58.428 23:51:33 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.428 23:51:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.428 ************************************ 00:09:58.428 END TEST skip_rpc 00:09:58.428 ************************************ 00:09:58.428 23:51:33 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client/rpc_client.sh 00:09:58.428 23:51:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.428 23:51:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.428 23:51:33 -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.689 ************************************ 00:09:58.689 START TEST rpc_client 00:09:58.689 ************************************ 00:09:58.689 23:51:33 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client/rpc_client.sh 00:09:58.689 * Looking for test storage... 00:09:58.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client 00:09:58.689 23:51:33 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:58.689 23:51:33 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:09:58.689 23:51:33 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:58.689 23:51:33 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.689 23:51:33 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:58.689 23:51:33 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.689 23:51:33 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.689 --rc genhtml_branch_coverage=1 00:09:58.689 --rc genhtml_function_coverage=1 00:09:58.689 --rc genhtml_legend=1 00:09:58.689 --rc geninfo_all_blocks=1 00:09:58.689 --rc geninfo_unexecuted_blocks=1 00:09:58.689 00:09:58.689 ' 00:09:58.689 23:51:33 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.689 --rc genhtml_branch_coverage=1 00:09:58.689 --rc genhtml_function_coverage=1 00:09:58.689 --rc genhtml_legend=1 00:09:58.689 --rc geninfo_all_blocks=1 00:09:58.689 --rc geninfo_unexecuted_blocks=1 00:09:58.689 00:09:58.689 ' 00:09:58.689 23:51:33 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.689 --rc genhtml_branch_coverage=1 00:09:58.689 --rc genhtml_function_coverage=1 00:09:58.689 --rc genhtml_legend=1 00:09:58.689 --rc geninfo_all_blocks=1 00:09:58.689 --rc geninfo_unexecuted_blocks=1 00:09:58.689 00:09:58.689 ' 00:09:58.689 23:51:33 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.689 --rc genhtml_branch_coverage=1 00:09:58.689 --rc genhtml_function_coverage=1 00:09:58.689 --rc genhtml_legend=1 00:09:58.689 --rc geninfo_all_blocks=1 00:09:58.689 --rc geninfo_unexecuted_blocks=1 00:09:58.689 00:09:58.689 ' 00:09:58.689 23:51:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client/rpc_client_test 00:09:58.689 OK 00:09:58.689 23:51:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:58.689 00:09:58.689 real 0m0.200s 00:09:58.689 user 0m0.122s 00:09:58.689 sys 0m0.090s 00:09:58.689 23:51:33 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.689 23:51:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:58.689 ************************************ 00:09:58.689 END TEST rpc_client 00:09:58.689 ************************************ 00:09:58.689 23:51:33 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config.sh 
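The version check traced repeatedly above runs scripts/common.sh's lt helper on the output of "lcov --version | awk '{print $NF}'" to decide whether the older --rc lcov_branch_coverage/lcov_function_coverage options should be used. A standalone sketch of the same idea (split the dotted versions, compare field by field); this is illustrative only, assumes numeric dotted versions, and is not the scripts/common.sh implementation itself:

  # Return 0 (true) if version $1 is strictly lower than version $2.
  version_lt() {
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < max; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
          (( x > y )) && return 1           # first differing field decides
          (( x < y )) && return 0
      done
      return 1                              # equal versions are not "less than"
  }

  # Same decision the trace makes: lcov 1.x still takes the --rc coverage options.
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi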
00:09:58.689 23:51:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.689 23:51:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.689 23:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:58.950 ************************************ 00:09:58.950 START TEST json_config 00:09:58.950 ************************************ 00:09:58.950 23:51:33 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config.sh 00:09:58.950 23:51:33 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:58.950 23:51:33 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:09:58.950 23:51:33 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:58.950 23:51:33 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:58.950 23:51:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.950 23:51:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.950 23:51:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.950 23:51:33 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.951 23:51:33 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.951 23:51:33 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.951 23:51:33 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.951 23:51:33 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.951 23:51:33 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.951 23:51:33 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.951 23:51:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.951 23:51:33 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:58.951 23:51:33 json_config -- scripts/common.sh@345 -- # : 1 00:09:58.951 23:51:33 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.951 23:51:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.951 23:51:33 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:58.951 23:51:33 json_config -- scripts/common.sh@353 -- # local d=1 00:09:58.951 23:51:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.951 23:51:33 json_config -- scripts/common.sh@355 -- # echo 1 00:09:58.951 23:51:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.951 23:51:33 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:58.951 23:51:33 json_config -- scripts/common.sh@353 -- # local d=2 00:09:58.951 23:51:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.951 23:51:33 json_config -- scripts/common.sh@355 -- # echo 2 00:09:58.951 23:51:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.951 23:51:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.951 23:51:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.951 23:51:33 json_config -- scripts/common.sh@368 -- # return 0 00:09:58.951 23:51:33 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.951 23:51:33 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:58.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.951 --rc genhtml_branch_coverage=1 00:09:58.951 --rc genhtml_function_coverage=1 00:09:58.951 --rc genhtml_legend=1 00:09:58.951 --rc geninfo_all_blocks=1 00:09:58.951 --rc geninfo_unexecuted_blocks=1 00:09:58.951 00:09:58.951 ' 00:09:58.951 23:51:33 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:58.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.951 --rc genhtml_branch_coverage=1 00:09:58.951 --rc genhtml_function_coverage=1 00:09:58.951 --rc genhtml_legend=1 00:09:58.951 --rc geninfo_all_blocks=1 00:09:58.951 --rc geninfo_unexecuted_blocks=1 00:09:58.951 00:09:58.951 ' 00:09:58.951 23:51:33 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:58.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.951 --rc genhtml_branch_coverage=1 00:09:58.951 --rc genhtml_function_coverage=1 00:09:58.951 --rc genhtml_legend=1 00:09:58.951 --rc geninfo_all_blocks=1 00:09:58.951 --rc geninfo_unexecuted_blocks=1 00:09:58.951 00:09:58.951 ' 00:09:58.951 23:51:33 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:58.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.951 --rc genhtml_branch_coverage=1 00:09:58.951 --rc genhtml_function_coverage=1 00:09:58.951 --rc genhtml_legend=1 00:09:58.951 --rc geninfo_all_blocks=1 00:09:58.951 --rc geninfo_unexecuted_blocks=1 00:09:58.951 00:09:58.951 ' 00:09:58.951 23:51:33 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:09:58.951 23:51:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:09:58.951 23:51:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.951 23:51:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.951 23:51:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.951 23:51:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.951 23:51:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.951 23:51:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.951 23:51:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.951 23:51:33 json_config -- paths/export.sh@5 -- # export PATH 00:09:58.951 23:51:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@51 -- # : 0 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.951 23:51:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
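In the nvmf/common.sh setup traced above, the host NQN comes from nvme-cli's gen-hostnqn and the host ID matches the UUID portion of that NQN, with both packed into the NVME_HOST argument array. A sketch of that derivation and of how the pair would be consumed; it assumes nvme-cli is installed, and the connect line assumes a target is already listening on the 127.0.0.1:4420 / testnqn defaults set here:

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the trailing uuid (one way to match the trace)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # Later consumers pass the pair through, e.g. (only once a target is listening):
  # nvme connect "${NVME_HOST[@]}" -t tcp -a 127.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn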
00:09:58.951 23:51:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.952 23:51:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.952 23:51:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.952 23:51:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.952 23:51:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.952 23:51:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.952 23:51:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/common.sh 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_initiator_config.json') 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:09:58.952 INFO: JSON configuration test init 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:09:58.952 23:51:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.952 23:51:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:09:58.952 23:51:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.952 23:51:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:58.952 23:51:33 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:09:58.952 23:51:33 json_config 
-- json_config/common.sh@9 -- # local app=target 00:09:58.952 23:51:33 json_config -- json_config/common.sh@10 -- # shift 00:09:58.952 23:51:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:58.952 23:51:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:58.952 23:51:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:58.952 23:51:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:58.952 23:51:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:58.952 23:51:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=176620 00:09:58.952 23:51:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:58.952 Waiting for target to run... 00:09:58.952 23:51:33 json_config -- json_config/common.sh@25 -- # waitforlisten 176620 /var/tmp/spdk_tgt.sock 00:09:58.952 23:51:33 json_config -- common/autotest_common.sh@835 -- # '[' -z 176620 ']' 00:09:58.952 23:51:33 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:58.952 23:51:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:58.952 23:51:33 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.952 23:51:33 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:58.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:58.952 23:51:33 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.952 23:51:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:59.212 [2024-12-09 23:51:33.901746] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
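The json_config target is started with the command shown just above: paused before subsystem init (--wait-for-rpc) on a private RPC socket. One way to reproduce the "wait for it to listen" step is to poll an RPC that is served even in the pre-init state, such as rpc_get_methods; the suite's waitforlisten helper does more bookkeeping, so this is only a sketch with placeholder paths for a built tree:

  sock=/var/tmp/spdk_tgt.sock
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
  pid=$!
  for (( i = 0; i < 100; i++ )); do
      # rpc_get_methods answers before framework_start_init, so it is a safe probe.
      if ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
          echo "target $pid is listening on $sock"
          break
      fi
      sleep 0.1
  done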
00:09:59.212 [2024-12-09 23:51:33.901794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176620 ] 00:09:59.471 [2024-12-09 23:51:34.354986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.471 [2024-12-09 23:51:34.405032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.040 23:51:34 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.040 23:51:34 json_config -- common/autotest_common.sh@868 -- # return 0 00:10:00.040 23:51:34 json_config -- json_config/common.sh@26 -- # echo '' 00:10:00.040 00:10:00.040 23:51:34 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:10:00.040 23:51:34 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:10:00.040 23:51:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.040 23:51:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:00.040 23:51:34 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:10:00.040 23:51:34 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:10:00.040 23:51:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:00.040 23:51:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:00.040 23:51:34 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:10:00.040 23:51:34 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:10:00.040 23:51:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:10:03.357 23:51:37 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:10:03.357 23:51:37 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:10:03.357 23:51:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.357 23:51:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:03.357 23:51:37 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:10:03.357 23:51:37 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:10:03.357 23:51:37 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:10:03.357 23:51:37 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:10:03.357 23:51:37 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:10:03.357 23:51:37 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:10:03.357 23:51:37 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:10:03.357 23:51:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@51 -- # local get_types 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:10:03.357 23:51:38 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@54 -- # sort 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:10:03.357 23:51:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:03.357 23:51:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@62 -- # return 0 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:10:03.357 23:51:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.357 23:51:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:10:03.357 23:51:38 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:10:03.357 23:51:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:10:03.357 MallocForNvmf0 00:10:03.617 23:51:38 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:10:03.617 23:51:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:10:03.617 MallocForNvmf1 00:10:03.617 23:51:38 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:10:03.617 23:51:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:10:03.877 [2024-12-09 23:51:38.673033] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.877 23:51:38 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:03.877 23:51:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:04.137 23:51:38 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:10:04.137 23:51:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:10:04.397 23:51:39 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:10:04.397 23:51:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:10:04.397 23:51:39 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:10:04.397 23:51:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:10:04.656 [2024-12-09 23:51:39.479491] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:04.656 23:51:39 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:10:04.656 23:51:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.656 23:51:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:04.656 23:51:39 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:10:04.656 23:51:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.656 23:51:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:04.656 23:51:39 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:10:04.656 23:51:39 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:04.656 23:51:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:10:04.916 MallocBdevForConfigChangeCheck 00:10:04.916 23:51:39 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:10:04.916 23:51:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.916 23:51:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:04.916 23:51:39 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:10:04.916 23:51:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:05.485 23:51:40 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:10:05.486 INFO: shutting down applications... 
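The trace above is the whole NVMe-oF/TCP target setup for this test, driven through rpc.py against the target's Unix-domain RPC socket: two malloc bdevs, a TCP transport, one subsystem with two namespaces, a listener on 127.0.0.1:4420, and finally save_config, before the shutdown/relaunch cycle that follows. As a reference for replaying it by hand, a minimal sketch; every command and argument is copied from the trace, while $SPDK (the SPDK repository root) is shorthand introduced here, and redirecting save_config to a file mirrors how the harness produces spdk_tgt_config.json:

    # assumes a running spdk_tgt with its RPC socket at /var/tmp/spdk_tgt.sock
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json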
00:10:05.486 23:51:40 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:10:05.486 23:51:40 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:10:05.486 23:51:40 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:10:05.486 23:51:40 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:06.865 Calling clear_iscsi_subsystem 00:10:06.865 Calling clear_nvmf_subsystem 00:10:06.865 Calling clear_nbd_subsystem 00:10:06.865 Calling clear_ublk_subsystem 00:10:06.865 Calling clear_vhost_blk_subsystem 00:10:06.865 Calling clear_vhost_scsi_subsystem 00:10:06.865 Calling clear_bdev_subsystem 00:10:06.865 23:51:41 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py 00:10:06.865 23:51:41 json_config -- json_config/json_config.sh@350 -- # count=100 00:10:06.865 23:51:41 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:10:06.865 23:51:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:06.865 23:51:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method check_empty 00:10:06.865 23:51:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:07.434 23:51:42 json_config -- json_config/json_config.sh@352 -- # break 00:10:07.434 23:51:42 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:10:07.434 23:51:42 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:10:07.434 23:51:42 json_config -- json_config/common.sh@31 -- # local app=target 00:10:07.434 23:51:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:07.434 23:51:42 json_config -- json_config/common.sh@35 -- # [[ -n 176620 ]] 00:10:07.434 23:51:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 176620 00:10:07.434 23:51:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:07.434 23:51:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:07.434 23:51:42 json_config -- json_config/common.sh@41 -- # kill -0 176620 00:10:07.435 23:51:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:07.694 23:51:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:07.694 23:51:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:07.694 23:51:42 json_config -- json_config/common.sh@41 -- # kill -0 176620 00:10:07.694 23:51:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:07.694 23:51:42 json_config -- json_config/common.sh@43 -- # break 00:10:07.694 23:51:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:07.694 23:51:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:07.694 SPDK target shutdown done 00:10:07.694 23:51:42 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:10:07.694 INFO: relaunching applications... 
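The shutdown above follows a simple pattern from json_config/common.sh: send SIGINT to the target and then poll its pid instead of blocking on it. A condensed sketch of that loop, with the pid taken from the trace (the 2>/dev/null is added here for readability and is not in the original):

    kill -SIGINT 176620                      # ask the target to shut down cleanly
    for ((i = 0; i < 30; i++)); do           # up to 30 attempts, 0.5 s apart
        kill -0 176620 2>/dev/null || break  # kill -0 only checks whether the pid still exists
        sleep 0.5
    done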
00:10:07.694 23:51:42 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:10:07.694 23:51:42 json_config -- json_config/common.sh@9 -- # local app=target 00:10:07.694 23:51:42 json_config -- json_config/common.sh@10 -- # shift 00:10:07.953 23:51:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:07.953 23:51:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:07.953 23:51:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:07.953 23:51:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:07.953 23:51:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:07.954 23:51:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=178144 00:10:07.954 23:51:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:07.954 Waiting for target to run... 00:10:07.954 23:51:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:10:07.954 23:51:42 json_config -- json_config/common.sh@25 -- # waitforlisten 178144 /var/tmp/spdk_tgt.sock 00:10:07.954 23:51:42 json_config -- common/autotest_common.sh@835 -- # '[' -z 178144 ']' 00:10:07.954 23:51:42 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:07.954 23:51:42 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.954 23:51:42 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:07.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:07.954 23:51:42 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.954 23:51:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:07.954 [2024-12-09 23:51:42.685192] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:10:07.954 [2024-12-09 23:51:42.685254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178144 ] 00:10:08.213 [2024-12-09 23:51:43.140199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.473 [2024-12-09 23:51:43.191788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.771 [2024-12-09 23:51:46.218999] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.771 [2024-12-09 23:51:46.251325] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:12.031 23:51:46 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.031 23:51:46 json_config -- common/autotest_common.sh@868 -- # return 0 00:10:12.031 23:51:46 json_config -- json_config/common.sh@26 -- # echo '' 00:10:12.031 00:10:12.031 23:51:46 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:10:12.031 23:51:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:12.031 INFO: Checking if target configuration is the same... 
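The relaunch shows the other half of the JSON-config round trip: spdk_tgt is started directly from the file written by save_config. Stripped of the workspace prefix, the command from the trace is essentially the line below (-m is the reactor core mask, -s the memory size in MB, -r the RPC socket path, --json the configuration to load at startup):

    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json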
00:10:12.031 23:51:46 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:10:12.031 23:51:46 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:10:12.031 23:51:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:12.031 + '[' 2 -ne 2 ']' 00:10:12.031 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh 00:10:12.031 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/../.. 00:10:12.031 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:10:12.031 +++ basename /dev/fd/62 00:10:12.031 ++ mktemp /tmp/62.XXX 00:10:12.031 + tmp_file_1=/tmp/62.oha 00:10:12.031 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:10:12.031 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:12.031 + tmp_file_2=/tmp/spdk_tgt_config.json.ADW 00:10:12.031 + ret=0 00:10:12.031 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort 00:10:12.599 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort 00:10:12.599 + diff -u /tmp/62.oha /tmp/spdk_tgt_config.json.ADW 00:10:12.599 + echo 'INFO: JSON config files are the same' 00:10:12.599 INFO: JSON config files are the same 00:10:12.599 + rm /tmp/62.oha /tmp/spdk_tgt_config.json.ADW 00:10:12.599 + exit 0 00:10:12.599 23:51:47 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:10:12.599 23:51:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:12.599 INFO: changing configuration and checking if this can be detected... 00:10:12.599 23:51:47 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:12.599 23:51:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:12.599 23:51:47 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:10:12.599 23:51:47 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:10:12.599 23:51:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:12.599 + '[' 2 -ne 2 ']' 00:10:12.599 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh 00:10:12.599 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/../.. 
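The comparison above is what json_diff.sh does: dump the live configuration with save_config, normalize both sides with config_filter.py -method sort, and diff the results, so key ordering never shows up as a difference. Outside the harness the same check could look roughly like the sketch below; live.json and saved.json are placeholder names, whereas the harness itself uses mktemp files and /dev/fd redirection as seen in the trace:

    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $SPDK/test/json_config/config_filter.py -method sort > live.json
    $SPDK/test/json_config/config_filter.py -method sort < spdk_tgt_config.json > saved.json
    diff -u saved.json live.json && echo 'INFO: JSON config files are the same'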
00:10:12.599 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:10:12.599 +++ basename /dev/fd/62 00:10:12.599 ++ mktemp /tmp/62.XXX 00:10:12.599 + tmp_file_1=/tmp/62.B25 00:10:12.599 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:10:12.599 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:12.599 + tmp_file_2=/tmp/spdk_tgt_config.json.nis 00:10:12.599 + ret=0 00:10:12.599 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort 00:10:13.168 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort 00:10:13.168 + diff -u /tmp/62.B25 /tmp/spdk_tgt_config.json.nis 00:10:13.168 + ret=1 00:10:13.168 + echo '=== Start of file: /tmp/62.B25 ===' 00:10:13.168 + cat /tmp/62.B25 00:10:13.168 + echo '=== End of file: /tmp/62.B25 ===' 00:10:13.168 + echo '' 00:10:13.168 + echo '=== Start of file: /tmp/spdk_tgt_config.json.nis ===' 00:10:13.168 + cat /tmp/spdk_tgt_config.json.nis 00:10:13.169 + echo '=== End of file: /tmp/spdk_tgt_config.json.nis ===' 00:10:13.169 + echo '' 00:10:13.169 + rm /tmp/62.B25 /tmp/spdk_tgt_config.json.nis 00:10:13.169 + exit 1 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:10:13.169 INFO: configuration change detected. 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:10:13.169 23:51:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:13.169 23:51:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@324 -- # [[ -n 178144 ]] 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:10:13.169 23:51:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:13.169 23:51:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@200 -- # uname -s 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:10:13.169 23:51:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:13.169 23:51:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:13.169 23:51:47 json_config -- json_config/json_config.sh@330 -- # killprocess 178144 00:10:13.169 23:51:47 json_config -- common/autotest_common.sh@954 -- # '[' -z 178144 ']' 00:10:13.169 23:51:47 json_config -- common/autotest_common.sh@958 -- # kill -0 178144 00:10:13.169 23:51:47 json_config -- common/autotest_common.sh@959 -- # uname 00:10:13.169 23:51:47 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.169 
23:51:47 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 178144 00:10:13.169 23:51:48 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.169 23:51:48 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.169 23:51:48 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 178144' 00:10:13.169 killing process with pid 178144 00:10:13.169 23:51:48 json_config -- common/autotest_common.sh@973 -- # kill 178144 00:10:13.169 23:51:48 json_config -- common/autotest_common.sh@978 -- # wait 178144 00:10:15.077 23:51:49 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:10:15.077 23:51:49 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:10:15.077 23:51:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.077 23:51:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:15.077 23:51:49 json_config -- json_config/json_config.sh@335 -- # return 0 00:10:15.077 23:51:49 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:10:15.077 INFO: Success 00:10:15.077 00:10:15.077 real 0m15.888s 00:10:15.077 user 0m16.378s 00:10:15.077 sys 0m2.740s 00:10:15.077 23:51:49 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.077 23:51:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:15.077 ************************************ 00:10:15.077 END TEST json_config 00:10:15.077 ************************************ 00:10:15.077 23:51:49 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config_extra_key.sh 00:10:15.077 23:51:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:15.077 23:51:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.077 23:51:49 -- common/autotest_common.sh@10 -- # set +x 00:10:15.077 ************************************ 00:10:15.077 START TEST json_config_extra_key 00:10:15.077 ************************************ 00:10:15.077 23:51:49 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config_extra_key.sh 00:10:15.077 23:51:49 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:15.077 23:51:49 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:10:15.077 23:51:49 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:15.077 23:51:49 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.077 23:51:49 
json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:15.077 23:51:49 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.078 23:51:49 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:15.078 23:51:49 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.078 23:51:49 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.078 23:51:49 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.078 23:51:49 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:15.078 23:51:49 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.078 23:51:49 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:15.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.078 --rc genhtml_branch_coverage=1 00:10:15.078 --rc genhtml_function_coverage=1 00:10:15.078 --rc genhtml_legend=1 00:10:15.078 --rc geninfo_all_blocks=1 00:10:15.078 --rc geninfo_unexecuted_blocks=1 00:10:15.078 00:10:15.078 ' 00:10:15.078 23:51:49 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:15.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.078 --rc genhtml_branch_coverage=1 00:10:15.078 --rc genhtml_function_coverage=1 00:10:15.078 --rc genhtml_legend=1 00:10:15.078 --rc geninfo_all_blocks=1 00:10:15.078 --rc geninfo_unexecuted_blocks=1 00:10:15.078 00:10:15.078 ' 00:10:15.078 23:51:49 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:15.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.078 --rc genhtml_branch_coverage=1 00:10:15.078 --rc genhtml_function_coverage=1 00:10:15.078 --rc genhtml_legend=1 00:10:15.078 --rc geninfo_all_blocks=1 00:10:15.078 --rc geninfo_unexecuted_blocks=1 00:10:15.078 00:10:15.078 ' 00:10:15.078 23:51:49 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:15.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.078 --rc genhtml_branch_coverage=1 00:10:15.078 --rc genhtml_function_coverage=1 00:10:15.078 --rc genhtml_legend=1 00:10:15.078 --rc geninfo_all_blocks=1 00:10:15.078 --rc geninfo_unexecuted_blocks=1 00:10:15.078 00:10:15.078 ' 00:10:15.078 23:51:49 
json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:10:15.078 23:51:49 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.078 23:51:49 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.078 23:51:49 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.078 23:51:49 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.078 23:51:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.078 23:51:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.078 23:51:49 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.078 23:51:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:15.078 23:51:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.078 23:51:49 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.078 23:51:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/common.sh 00:10:15.078 23:51:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:15.078 23:51:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:15.078 23:51:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:15.078 23:51:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:15.078 23:51:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:15.078 23:51:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:15.078 23:51:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/extra_key.json') 00:10:15.078 23:51:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:15.078 23:51:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:15.078 23:51:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:15.078 INFO: launching applications... 
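The declarations above are the whole application bookkeeping of json_config/common.sh: one bash associative array per attribute, keyed by application name ('target' in this run). Trimmed to the values visible in the trace:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/extra_key.json')
    # later lines in the trace read and write these as ${app_socket[$app]}, app_pid["$app"]=<pid>, etc.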
00:10:15.078 23:51:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/extra_key.json 00:10:15.078 23:51:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:15.078 23:51:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:15.078 23:51:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:15.078 23:51:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:15.078 23:51:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:15.078 23:51:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:15.078 23:51:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:15.078 23:51:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=179466 00:10:15.078 23:51:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:15.078 Waiting for target to run... 00:10:15.078 23:51:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 179466 /var/tmp/spdk_tgt.sock 00:10:15.078 23:51:49 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 179466 ']' 00:10:15.078 23:51:49 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/extra_key.json 00:10:15.078 23:51:49 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:15.078 23:51:49 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.078 23:51:49 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:15.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:15.078 23:51:49 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.078 23:51:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:15.078 [2024-12-09 23:51:49.859350] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:10:15.078 [2024-12-09 23:51:49.859399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179466 ] 00:10:15.338 [2024-12-09 23:51:50.147807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.338 [2024-12-09 23:51:50.182278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.906 23:51:50 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.906 23:51:50 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:10:15.906 23:51:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:15.906 00:10:15.906 23:51:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:15.906 INFO: shutting down applications... 
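Between launching the target and shutting it down, the harness blocks in waitforlisten until the new process answers on its RPC socket. The trace only shows the entry and exit of that helper, so the following is a conceptual sketch rather than the real autotest_common.sh code; the pid and socket path are taken from the trace, and rpc_get_methods is used here simply as a cheap RPC that any healthy target answers:

    pid=179466 sock=/var/tmp/spdk_tgt.sock
    until $SPDK/scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &> /dev/null; do
        kill -0 "$pid" || { echo "target exited before listening on $sock"; exit 1; }
        sleep 0.1
    done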
00:10:15.906 23:51:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:15.906 23:51:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:15.906 23:51:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:15.906 23:51:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 179466 ]] 00:10:15.906 23:51:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 179466 00:10:15.906 23:51:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:15.906 23:51:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:15.906 23:51:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 179466 00:10:15.906 23:51:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:16.476 23:51:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:16.476 23:51:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:16.476 23:51:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 179466 00:10:16.476 23:51:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:16.476 23:51:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:16.476 23:51:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:16.476 23:51:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:16.476 SPDK target shutdown done 00:10:16.476 23:51:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:16.476 Success 00:10:16.476 00:10:16.476 real 0m1.583s 00:10:16.476 user 0m1.360s 00:10:16.476 sys 0m0.414s 00:10:16.476 23:51:51 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.476 23:51:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:16.476 ************************************ 00:10:16.476 END TEST json_config_extra_key 00:10:16.476 ************************************ 00:10:16.476 23:51:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:16.476 23:51:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:16.476 23:51:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.476 23:51:51 -- common/autotest_common.sh@10 -- # set +x 00:10:16.476 ************************************ 00:10:16.476 START TEST alias_rpc 00:10:16.476 ************************************ 00:10:16.476 23:51:51 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:16.476 * Looking for test storage... 
00:10:16.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/alias_rpc 00:10:16.476 23:51:51 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:16.476 23:51:51 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:16.476 23:51:51 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:16.737 23:51:51 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@345 -- # : 1 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.737 23:51:51 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:16.737 23:51:51 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.737 23:51:51 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.737 --rc genhtml_branch_coverage=1 00:10:16.737 --rc genhtml_function_coverage=1 00:10:16.737 --rc genhtml_legend=1 00:10:16.737 --rc geninfo_all_blocks=1 00:10:16.737 --rc geninfo_unexecuted_blocks=1 00:10:16.737 00:10:16.737 ' 00:10:16.737 23:51:51 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.737 --rc genhtml_branch_coverage=1 00:10:16.737 --rc genhtml_function_coverage=1 00:10:16.737 --rc genhtml_legend=1 00:10:16.737 --rc geninfo_all_blocks=1 00:10:16.737 --rc geninfo_unexecuted_blocks=1 00:10:16.737 00:10:16.737 ' 00:10:16.737 23:51:51 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.737 --rc genhtml_branch_coverage=1 00:10:16.737 --rc genhtml_function_coverage=1 00:10:16.737 --rc genhtml_legend=1 00:10:16.737 --rc geninfo_all_blocks=1 00:10:16.737 --rc geninfo_unexecuted_blocks=1 00:10:16.737 00:10:16.737 ' 00:10:16.737 23:51:51 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.737 --rc genhtml_branch_coverage=1 00:10:16.737 --rc genhtml_function_coverage=1 00:10:16.737 --rc genhtml_legend=1 00:10:16.737 --rc geninfo_all_blocks=1 00:10:16.737 --rc geninfo_unexecuted_blocks=1 00:10:16.737 00:10:16.737 ' 00:10:16.737 23:51:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:16.737 23:51:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=179912 00:10:16.737 23:51:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:10:16.737 23:51:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 179912 00:10:16.737 23:51:51 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 179912 ']' 00:10:16.737 23:51:51 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.737 23:51:51 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.737 23:51:51 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.737 23:51:51 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.737 23:51:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.737 [2024-12-09 23:51:51.513348] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:10:16.737 [2024-12-09 23:51:51.513397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179912 ] 00:10:16.737 [2024-12-09 23:51:51.587417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.737 [2024-12-09 23:51:51.628373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.997 23:51:51 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.997 23:51:51 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:16.997 23:51:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py load_config -i 00:10:17.256 23:51:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 179912 00:10:17.256 23:51:52 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 179912 ']' 00:10:17.256 23:51:52 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 179912 00:10:17.256 23:51:52 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:10:17.256 23:51:52 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.256 23:51:52 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 179912 00:10:17.256 23:51:52 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.256 23:51:52 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.256 23:51:52 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 179912' 00:10:17.256 killing process with pid 179912 00:10:17.256 23:51:52 alias_rpc -- common/autotest_common.sh@973 -- # kill 179912 00:10:17.257 23:51:52 alias_rpc -- common/autotest_common.sh@978 -- # wait 179912 00:10:17.516 00:10:17.516 real 0m1.137s 00:10:17.516 user 0m1.144s 00:10:17.516 sys 0m0.434s 00:10:17.516 23:51:52 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.516 23:51:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.516 ************************************ 00:10:17.516 END TEST alias_rpc 00:10:17.516 ************************************ 00:10:17.516 23:51:52 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:17.516 23:51:52 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/tcp.sh 00:10:17.516 23:51:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:17.776 23:51:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.776 23:51:52 -- common/autotest_common.sh@10 -- # set +x 00:10:17.776 ************************************ 00:10:17.776 START TEST spdkcli_tcp 00:10:17.776 ************************************ 00:10:17.776 23:51:52 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/tcp.sh 00:10:17.776 * Looking for test storage... 
00:10:17.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli 00:10:17.776 23:51:52 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:17.776 23:51:52 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:10:17.776 23:51:52 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:17.776 23:51:52 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.776 23:51:52 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:17.776 23:51:52 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.776 23:51:52 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:17.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.776 --rc genhtml_branch_coverage=1 00:10:17.776 --rc genhtml_function_coverage=1 00:10:17.776 --rc genhtml_legend=1 00:10:17.776 --rc geninfo_all_blocks=1 00:10:17.776 --rc geninfo_unexecuted_blocks=1 00:10:17.776 00:10:17.776 ' 00:10:17.776 23:51:52 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:17.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.776 --rc genhtml_branch_coverage=1 00:10:17.776 --rc genhtml_function_coverage=1 00:10:17.776 --rc genhtml_legend=1 00:10:17.776 --rc geninfo_all_blocks=1 00:10:17.776 --rc 
geninfo_unexecuted_blocks=1 00:10:17.776 00:10:17.776 ' 00:10:17.776 23:51:52 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:17.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.776 --rc genhtml_branch_coverage=1 00:10:17.776 --rc genhtml_function_coverage=1 00:10:17.776 --rc genhtml_legend=1 00:10:17.776 --rc geninfo_all_blocks=1 00:10:17.776 --rc geninfo_unexecuted_blocks=1 00:10:17.776 00:10:17.776 ' 00:10:17.777 23:51:52 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:17.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.777 --rc genhtml_branch_coverage=1 00:10:17.777 --rc genhtml_function_coverage=1 00:10:17.777 --rc genhtml_legend=1 00:10:17.777 --rc geninfo_all_blocks=1 00:10:17.777 --rc geninfo_unexecuted_blocks=1 00:10:17.777 00:10:17.777 ' 00:10:17.777 23:51:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/common.sh 00:10:17.777 23:51:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py 00:10:17.777 23:51:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/clear_config.py 00:10:17.777 23:51:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:17.777 23:51:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:17.777 23:51:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:17.777 23:51:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:17.777 23:51:52 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:17.777 23:51:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:17.777 23:51:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=180098 00:10:17.777 23:51:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 180098 00:10:17.777 23:51:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:17.777 23:51:52 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 180098 ']' 00:10:17.777 23:51:52 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.777 23:51:52 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.777 23:51:52 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.777 23:51:52 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.777 23:51:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:18.037 [2024-12-09 23:51:52.715643] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
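The spdkcli_tcp test being set up above is the one place in this run where rpc.py talks TCP instead of a Unix socket: as the next lines show, socat bridges TCP port 9998 on 127.0.0.1 to the target's /var/tmp/spdk.sock, and rpc.py is pointed at that port with retries and a timeout. Condensed from the trace:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # backgrounded; the harness records its pid as socat_pid
    $SPDK/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods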
00:10:18.037 [2024-12-09 23:51:52.715694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180098 ] 00:10:18.037 [2024-12-09 23:51:52.791666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:18.037 [2024-12-09 23:51:52.834125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.037 [2024-12-09 23:51:52.834128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.296 23:51:53 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.296 23:51:53 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:10:18.296 23:51:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=180226 00:10:18.296 23:51:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:18.296 23:51:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:18.555 [ 00:10:18.555 "bdev_malloc_delete", 00:10:18.555 "bdev_malloc_create", 00:10:18.555 "bdev_null_resize", 00:10:18.555 "bdev_null_delete", 00:10:18.555 "bdev_null_create", 00:10:18.555 "bdev_nvme_cuse_unregister", 00:10:18.555 "bdev_nvme_cuse_register", 00:10:18.555 "bdev_opal_new_user", 00:10:18.555 "bdev_opal_set_lock_state", 00:10:18.555 "bdev_opal_delete", 00:10:18.555 "bdev_opal_get_info", 00:10:18.555 "bdev_opal_create", 00:10:18.555 "bdev_nvme_opal_revert", 00:10:18.555 "bdev_nvme_opal_init", 00:10:18.555 "bdev_nvme_send_cmd", 00:10:18.555 "bdev_nvme_set_keys", 00:10:18.555 "bdev_nvme_get_path_iostat", 00:10:18.555 "bdev_nvme_get_mdns_discovery_info", 00:10:18.555 "bdev_nvme_stop_mdns_discovery", 00:10:18.555 "bdev_nvme_start_mdns_discovery", 00:10:18.555 "bdev_nvme_set_multipath_policy", 00:10:18.555 "bdev_nvme_set_preferred_path", 00:10:18.555 "bdev_nvme_get_io_paths", 00:10:18.555 "bdev_nvme_remove_error_injection", 00:10:18.555 "bdev_nvme_add_error_injection", 00:10:18.555 "bdev_nvme_get_discovery_info", 00:10:18.555 "bdev_nvme_stop_discovery", 00:10:18.555 "bdev_nvme_start_discovery", 00:10:18.555 "bdev_nvme_get_controller_health_info", 00:10:18.555 "bdev_nvme_disable_controller", 00:10:18.555 "bdev_nvme_enable_controller", 00:10:18.555 "bdev_nvme_reset_controller", 00:10:18.555 "bdev_nvme_get_transport_statistics", 00:10:18.555 "bdev_nvme_apply_firmware", 00:10:18.555 "bdev_nvme_detach_controller", 00:10:18.555 "bdev_nvme_get_controllers", 00:10:18.555 "bdev_nvme_attach_controller", 00:10:18.555 "bdev_nvme_set_hotplug", 00:10:18.555 "bdev_nvme_set_options", 00:10:18.555 "bdev_passthru_delete", 00:10:18.555 "bdev_passthru_create", 00:10:18.555 "bdev_lvol_set_parent_bdev", 00:10:18.555 "bdev_lvol_set_parent", 00:10:18.555 "bdev_lvol_check_shallow_copy", 00:10:18.555 "bdev_lvol_start_shallow_copy", 00:10:18.555 "bdev_lvol_grow_lvstore", 00:10:18.555 "bdev_lvol_get_lvols", 00:10:18.555 "bdev_lvol_get_lvstores", 00:10:18.555 "bdev_lvol_delete", 00:10:18.555 "bdev_lvol_set_read_only", 00:10:18.555 "bdev_lvol_resize", 00:10:18.555 "bdev_lvol_decouple_parent", 00:10:18.555 "bdev_lvol_inflate", 00:10:18.555 "bdev_lvol_rename", 00:10:18.555 "bdev_lvol_clone_bdev", 00:10:18.555 "bdev_lvol_clone", 00:10:18.555 "bdev_lvol_snapshot", 00:10:18.555 "bdev_lvol_create", 00:10:18.555 "bdev_lvol_delete_lvstore", 00:10:18.555 "bdev_lvol_rename_lvstore", 
00:10:18.555 "bdev_lvol_create_lvstore", 00:10:18.555 "bdev_raid_set_options", 00:10:18.555 "bdev_raid_remove_base_bdev", 00:10:18.555 "bdev_raid_add_base_bdev", 00:10:18.555 "bdev_raid_delete", 00:10:18.555 "bdev_raid_create", 00:10:18.555 "bdev_raid_get_bdevs", 00:10:18.555 "bdev_error_inject_error", 00:10:18.555 "bdev_error_delete", 00:10:18.555 "bdev_error_create", 00:10:18.555 "bdev_split_delete", 00:10:18.555 "bdev_split_create", 00:10:18.555 "bdev_delay_delete", 00:10:18.555 "bdev_delay_create", 00:10:18.555 "bdev_delay_update_latency", 00:10:18.555 "bdev_zone_block_delete", 00:10:18.555 "bdev_zone_block_create", 00:10:18.555 "blobfs_create", 00:10:18.555 "blobfs_detect", 00:10:18.555 "blobfs_set_cache_size", 00:10:18.555 "bdev_aio_delete", 00:10:18.555 "bdev_aio_rescan", 00:10:18.555 "bdev_aio_create", 00:10:18.555 "bdev_ftl_set_property", 00:10:18.555 "bdev_ftl_get_properties", 00:10:18.555 "bdev_ftl_get_stats", 00:10:18.555 "bdev_ftl_unmap", 00:10:18.555 "bdev_ftl_unload", 00:10:18.555 "bdev_ftl_delete", 00:10:18.555 "bdev_ftl_load", 00:10:18.555 "bdev_ftl_create", 00:10:18.555 "bdev_virtio_attach_controller", 00:10:18.555 "bdev_virtio_scsi_get_devices", 00:10:18.555 "bdev_virtio_detach_controller", 00:10:18.555 "bdev_virtio_blk_set_hotplug", 00:10:18.555 "bdev_iscsi_delete", 00:10:18.555 "bdev_iscsi_create", 00:10:18.555 "bdev_iscsi_set_options", 00:10:18.555 "accel_error_inject_error", 00:10:18.555 "ioat_scan_accel_module", 00:10:18.555 "dsa_scan_accel_module", 00:10:18.555 "iaa_scan_accel_module", 00:10:18.555 "vfu_virtio_create_fs_endpoint", 00:10:18.555 "vfu_virtio_create_scsi_endpoint", 00:10:18.555 "vfu_virtio_scsi_remove_target", 00:10:18.555 "vfu_virtio_scsi_add_target", 00:10:18.555 "vfu_virtio_create_blk_endpoint", 00:10:18.555 "vfu_virtio_delete_endpoint", 00:10:18.555 "keyring_file_remove_key", 00:10:18.555 "keyring_file_add_key", 00:10:18.555 "keyring_linux_set_options", 00:10:18.555 "fsdev_aio_delete", 00:10:18.555 "fsdev_aio_create", 00:10:18.555 "iscsi_get_histogram", 00:10:18.555 "iscsi_enable_histogram", 00:10:18.555 "iscsi_set_options", 00:10:18.555 "iscsi_get_auth_groups", 00:10:18.555 "iscsi_auth_group_remove_secret", 00:10:18.555 "iscsi_auth_group_add_secret", 00:10:18.555 "iscsi_delete_auth_group", 00:10:18.555 "iscsi_create_auth_group", 00:10:18.555 "iscsi_set_discovery_auth", 00:10:18.555 "iscsi_get_options", 00:10:18.555 "iscsi_target_node_request_logout", 00:10:18.555 "iscsi_target_node_set_redirect", 00:10:18.555 "iscsi_target_node_set_auth", 00:10:18.555 "iscsi_target_node_add_lun", 00:10:18.555 "iscsi_get_stats", 00:10:18.556 "iscsi_get_connections", 00:10:18.556 "iscsi_portal_group_set_auth", 00:10:18.556 "iscsi_start_portal_group", 00:10:18.556 "iscsi_delete_portal_group", 00:10:18.556 "iscsi_create_portal_group", 00:10:18.556 "iscsi_get_portal_groups", 00:10:18.556 "iscsi_delete_target_node", 00:10:18.556 "iscsi_target_node_remove_pg_ig_maps", 00:10:18.556 "iscsi_target_node_add_pg_ig_maps", 00:10:18.556 "iscsi_create_target_node", 00:10:18.556 "iscsi_get_target_nodes", 00:10:18.556 "iscsi_delete_initiator_group", 00:10:18.556 "iscsi_initiator_group_remove_initiators", 00:10:18.556 "iscsi_initiator_group_add_initiators", 00:10:18.556 "iscsi_create_initiator_group", 00:10:18.556 "iscsi_get_initiator_groups", 00:10:18.556 "nvmf_set_crdt", 00:10:18.556 "nvmf_set_config", 00:10:18.556 "nvmf_set_max_subsystems", 00:10:18.556 "nvmf_stop_mdns_prr", 00:10:18.556 "nvmf_publish_mdns_prr", 00:10:18.556 "nvmf_subsystem_get_listeners", 00:10:18.556 
"nvmf_subsystem_get_qpairs", 00:10:18.556 "nvmf_subsystem_get_controllers", 00:10:18.556 "nvmf_get_stats", 00:10:18.556 "nvmf_get_transports", 00:10:18.556 "nvmf_create_transport", 00:10:18.556 "nvmf_get_targets", 00:10:18.556 "nvmf_delete_target", 00:10:18.556 "nvmf_create_target", 00:10:18.556 "nvmf_subsystem_allow_any_host", 00:10:18.556 "nvmf_subsystem_set_keys", 00:10:18.556 "nvmf_subsystem_remove_host", 00:10:18.556 "nvmf_subsystem_add_host", 00:10:18.556 "nvmf_ns_remove_host", 00:10:18.556 "nvmf_ns_add_host", 00:10:18.556 "nvmf_subsystem_remove_ns", 00:10:18.556 "nvmf_subsystem_set_ns_ana_group", 00:10:18.556 "nvmf_subsystem_add_ns", 00:10:18.556 "nvmf_subsystem_listener_set_ana_state", 00:10:18.556 "nvmf_discovery_get_referrals", 00:10:18.556 "nvmf_discovery_remove_referral", 00:10:18.556 "nvmf_discovery_add_referral", 00:10:18.556 "nvmf_subsystem_remove_listener", 00:10:18.556 "nvmf_subsystem_add_listener", 00:10:18.556 "nvmf_delete_subsystem", 00:10:18.556 "nvmf_create_subsystem", 00:10:18.556 "nvmf_get_subsystems", 00:10:18.556 "env_dpdk_get_mem_stats", 00:10:18.556 "nbd_get_disks", 00:10:18.556 "nbd_stop_disk", 00:10:18.556 "nbd_start_disk", 00:10:18.556 "ublk_recover_disk", 00:10:18.556 "ublk_get_disks", 00:10:18.556 "ublk_stop_disk", 00:10:18.556 "ublk_start_disk", 00:10:18.556 "ublk_destroy_target", 00:10:18.556 "ublk_create_target", 00:10:18.556 "virtio_blk_create_transport", 00:10:18.556 "virtio_blk_get_transports", 00:10:18.556 "vhost_controller_set_coalescing", 00:10:18.556 "vhost_get_controllers", 00:10:18.556 "vhost_delete_controller", 00:10:18.556 "vhost_create_blk_controller", 00:10:18.556 "vhost_scsi_controller_remove_target", 00:10:18.556 "vhost_scsi_controller_add_target", 00:10:18.556 "vhost_start_scsi_controller", 00:10:18.556 "vhost_create_scsi_controller", 00:10:18.556 "thread_set_cpumask", 00:10:18.556 "scheduler_set_options", 00:10:18.556 "framework_get_governor", 00:10:18.556 "framework_get_scheduler", 00:10:18.556 "framework_set_scheduler", 00:10:18.556 "framework_get_reactors", 00:10:18.556 "thread_get_io_channels", 00:10:18.556 "thread_get_pollers", 00:10:18.556 "thread_get_stats", 00:10:18.556 "framework_monitor_context_switch", 00:10:18.556 "spdk_kill_instance", 00:10:18.556 "log_enable_timestamps", 00:10:18.556 "log_get_flags", 00:10:18.556 "log_clear_flag", 00:10:18.556 "log_set_flag", 00:10:18.556 "log_get_level", 00:10:18.556 "log_set_level", 00:10:18.556 "log_get_print_level", 00:10:18.556 "log_set_print_level", 00:10:18.556 "framework_enable_cpumask_locks", 00:10:18.556 "framework_disable_cpumask_locks", 00:10:18.556 "framework_wait_init", 00:10:18.556 "framework_start_init", 00:10:18.556 "scsi_get_devices", 00:10:18.556 "bdev_get_histogram", 00:10:18.556 "bdev_enable_histogram", 00:10:18.556 "bdev_set_qos_limit", 00:10:18.556 "bdev_set_qd_sampling_period", 00:10:18.556 "bdev_get_bdevs", 00:10:18.556 "bdev_reset_iostat", 00:10:18.556 "bdev_get_iostat", 00:10:18.556 "bdev_examine", 00:10:18.556 "bdev_wait_for_examine", 00:10:18.556 "bdev_set_options", 00:10:18.556 "accel_get_stats", 00:10:18.556 "accel_set_options", 00:10:18.556 "accel_set_driver", 00:10:18.556 "accel_crypto_key_destroy", 00:10:18.556 "accel_crypto_keys_get", 00:10:18.556 "accel_crypto_key_create", 00:10:18.556 "accel_assign_opc", 00:10:18.556 "accel_get_module_info", 00:10:18.556 "accel_get_opc_assignments", 00:10:18.556 "vmd_rescan", 00:10:18.556 "vmd_remove_device", 00:10:18.556 "vmd_enable", 00:10:18.556 "sock_get_default_impl", 00:10:18.556 "sock_set_default_impl", 
00:10:18.556 "sock_impl_set_options", 00:10:18.556 "sock_impl_get_options", 00:10:18.556 "iobuf_get_stats", 00:10:18.556 "iobuf_set_options", 00:10:18.556 "keyring_get_keys", 00:10:18.556 "vfu_tgt_set_base_path", 00:10:18.556 "framework_get_pci_devices", 00:10:18.556 "framework_get_config", 00:10:18.556 "framework_get_subsystems", 00:10:18.556 "fsdev_set_opts", 00:10:18.556 "fsdev_get_opts", 00:10:18.556 "trace_get_info", 00:10:18.556 "trace_get_tpoint_group_mask", 00:10:18.556 "trace_disable_tpoint_group", 00:10:18.556 "trace_enable_tpoint_group", 00:10:18.556 "trace_clear_tpoint_mask", 00:10:18.556 "trace_set_tpoint_mask", 00:10:18.556 "notify_get_notifications", 00:10:18.556 "notify_get_types", 00:10:18.556 "spdk_get_version", 00:10:18.556 "rpc_get_methods" 00:10:18.556 ] 00:10:18.556 23:51:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:18.556 23:51:53 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.556 23:51:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:18.556 23:51:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:18.556 23:51:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 180098 00:10:18.556 23:51:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 180098 ']' 00:10:18.556 23:51:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 180098 00:10:18.556 23:51:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:10:18.556 23:51:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.556 23:51:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 180098 00:10:18.556 23:51:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.556 23:51:53 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.556 23:51:53 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 180098' 00:10:18.556 killing process with pid 180098 00:10:18.556 23:51:53 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 180098 00:10:18.556 23:51:53 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 180098 00:10:18.815 00:10:18.815 real 0m1.148s 00:10:18.815 user 0m1.928s 00:10:18.815 sys 0m0.460s 00:10:18.815 23:51:53 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.815 23:51:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:18.815 ************************************ 00:10:18.815 END TEST spdkcli_tcp 00:10:18.815 ************************************ 00:10:18.815 23:51:53 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:18.815 23:51:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.815 23:51:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.815 23:51:53 -- common/autotest_common.sh@10 -- # set +x 00:10:18.815 ************************************ 00:10:18.815 START TEST dpdk_mem_utility 00:10:18.815 ************************************ 00:10:18.816 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:19.075 * Looking for test storage... 
00:10:19.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/dpdk_memory_utility 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.075 23:51:53 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:19.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.075 --rc genhtml_branch_coverage=1 00:10:19.075 --rc genhtml_function_coverage=1 00:10:19.075 --rc genhtml_legend=1 00:10:19.075 --rc geninfo_all_blocks=1 00:10:19.075 --rc geninfo_unexecuted_blocks=1 00:10:19.075 00:10:19.075 ' 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:19.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.075 --rc 
genhtml_branch_coverage=1 00:10:19.075 --rc genhtml_function_coverage=1 00:10:19.075 --rc genhtml_legend=1 00:10:19.075 --rc geninfo_all_blocks=1 00:10:19.075 --rc geninfo_unexecuted_blocks=1 00:10:19.075 00:10:19.075 ' 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:19.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.075 --rc genhtml_branch_coverage=1 00:10:19.075 --rc genhtml_function_coverage=1 00:10:19.075 --rc genhtml_legend=1 00:10:19.075 --rc geninfo_all_blocks=1 00:10:19.075 --rc geninfo_unexecuted_blocks=1 00:10:19.075 00:10:19.075 ' 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:19.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.075 --rc genhtml_branch_coverage=1 00:10:19.075 --rc genhtml_function_coverage=1 00:10:19.075 --rc genhtml_legend=1 00:10:19.075 --rc geninfo_all_blocks=1 00:10:19.075 --rc geninfo_unexecuted_blocks=1 00:10:19.075 00:10:19.075 ' 00:10:19.075 23:51:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/dpdk_mem_info.py 00:10:19.075 23:51:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=180315 00:10:19.075 23:51:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 180315 00:10:19.075 23:51:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 180315 ']' 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.075 23:51:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:19.075 [2024-12-09 23:51:53.924448] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:10:19.075 [2024-12-09 23:51:53.924501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180315 ] 00:10:19.075 [2024-12-09 23:51:54.000155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.335 [2024-12-09 23:51:54.040516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.335 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.335 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:10:19.335 23:51:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:19.335 23:51:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:19.595 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.595 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:19.595 { 00:10:19.595 "filename": "/tmp/spdk_mem_dump.txt" 00:10:19.595 } 00:10:19.595 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.595 23:51:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/dpdk_mem_info.py 00:10:19.595 DPDK memory size 818.000000 MiB in 1 heap(s) 00:10:19.595 1 heaps totaling size 818.000000 MiB 00:10:19.595 size: 818.000000 MiB heap id: 0 00:10:19.595 end heaps---------- 00:10:19.595 9 mempools totaling size 603.782043 MiB 00:10:19.595 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:19.595 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:19.595 size: 100.555481 MiB name: bdev_io_180315 00:10:19.595 size: 50.003479 MiB name: msgpool_180315 00:10:19.595 size: 36.509338 MiB name: fsdev_io_180315 00:10:19.595 size: 21.763794 MiB name: PDU_Pool 00:10:19.595 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:19.595 size: 4.133484 MiB name: evtpool_180315 00:10:19.595 size: 0.026123 MiB name: Session_Pool 00:10:19.595 end mempools------- 00:10:19.595 6 memzones totaling size 4.142822 MiB 00:10:19.595 size: 1.000366 MiB name: RG_ring_0_180315 00:10:19.595 size: 1.000366 MiB name: RG_ring_1_180315 00:10:19.595 size: 1.000366 MiB name: RG_ring_4_180315 00:10:19.595 size: 1.000366 MiB name: RG_ring_5_180315 00:10:19.595 size: 0.125366 MiB name: RG_ring_2_180315 00:10:19.595 size: 0.015991 MiB name: RG_ring_3_180315 00:10:19.595 end memzones------- 00:10:19.595 23:51:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/dpdk_mem_info.py -m 0 00:10:19.595 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:10:19.595 list of free elements. 
size: 10.852478 MiB 00:10:19.595 element at address: 0x200019200000 with size: 0.999878 MiB 00:10:19.595 element at address: 0x200019400000 with size: 0.999878 MiB 00:10:19.595 element at address: 0x200000400000 with size: 0.998535 MiB 00:10:19.595 element at address: 0x200032000000 with size: 0.994446 MiB 00:10:19.595 element at address: 0x200006400000 with size: 0.959839 MiB 00:10:19.595 element at address: 0x200012c00000 with size: 0.944275 MiB 00:10:19.595 element at address: 0x200019600000 with size: 0.936584 MiB 00:10:19.595 element at address: 0x200000200000 with size: 0.717346 MiB 00:10:19.595 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:10:19.595 element at address: 0x200000c00000 with size: 0.495422 MiB 00:10:19.595 element at address: 0x20000a600000 with size: 0.490723 MiB 00:10:19.595 element at address: 0x200019800000 with size: 0.485657 MiB 00:10:19.595 element at address: 0x200003e00000 with size: 0.481934 MiB 00:10:19.595 element at address: 0x200028200000 with size: 0.410034 MiB 00:10:19.595 element at address: 0x200000800000 with size: 0.355042 MiB 00:10:19.595 list of standard malloc elements. size: 199.218628 MiB 00:10:19.595 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:10:19.595 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:10:19.595 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:10:19.595 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:10:19.595 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:10:19.595 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:10:19.595 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:10:19.595 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:10:19.595 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:10:19.595 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:10:19.595 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:10:19.595 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:10:19.595 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:10:19.595 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:10:19.595 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:10:19.595 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:10:19.595 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:10:19.595 element at address: 0x20000085b040 with size: 0.000183 MiB 00:10:19.595 element at address: 0x20000085f300 with size: 0.000183 MiB 00:10:19.595 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:10:19.595 element at address: 0x20000087f680 with size: 0.000183 MiB 00:10:19.595 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:10:19.595 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:10:19.595 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:10:19.595 element at address: 0x200000cff000 with size: 0.000183 MiB 00:10:19.595 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:10:19.595 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:10:19.595 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:10:19.595 element at address: 0x200003efb980 with size: 0.000183 MiB 00:10:19.595 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:10:19.595 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:10:19.595 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:10:19.595 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:10:19.595 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:10:19.595 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:10:19.595 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:10:19.595 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:10:19.595 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:10:19.595 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:10:19.595 element at address: 0x200028268f80 with size: 0.000183 MiB 00:10:19.595 element at address: 0x200028269040 with size: 0.000183 MiB 00:10:19.595 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:10:19.595 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:10:19.595 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:10:19.595 list of memzone associated elements. size: 607.928894 MiB 00:10:19.595 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:10:19.595 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:19.595 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:10:19.595 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:19.595 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:10:19.595 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_180315_0 00:10:19.595 element at address: 0x200000dff380 with size: 48.003052 MiB 00:10:19.595 associated memzone info: size: 48.002930 MiB name: MP_msgpool_180315_0 00:10:19.595 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:10:19.595 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_180315_0 00:10:19.595 element at address: 0x2000199be940 with size: 20.255554 MiB 00:10:19.595 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:19.595 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:10:19.595 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:19.595 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:10:19.595 associated memzone info: size: 3.000122 MiB name: MP_evtpool_180315_0 00:10:19.595 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:10:19.595 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_180315 00:10:19.595 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:10:19.595 associated memzone info: size: 1.007996 MiB name: MP_evtpool_180315 00:10:19.595 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:10:19.595 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:19.595 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:10:19.596 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:19.596 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:10:19.596 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:19.596 element at address: 0x200003efba40 with size: 1.008118 MiB 00:10:19.596 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:19.596 element at address: 0x200000cff180 with size: 1.000488 MiB 00:10:19.596 associated memzone info: size: 1.000366 MiB name: RG_ring_0_180315 00:10:19.596 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:10:19.596 associated memzone info: size: 1.000366 MiB name: RG_ring_1_180315 00:10:19.596 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:10:19.596 associated memzone info: size: 1.000366 MiB name: RG_ring_4_180315 00:10:19.596 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:10:19.596 associated memzone info: size: 1.000366 MiB name: RG_ring_5_180315 00:10:19.596 element at address: 0x20000087f740 with size: 0.500488 MiB 00:10:19.596 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_180315 00:10:19.596 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:10:19.596 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_180315 00:10:19.596 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:10:19.596 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:19.596 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:10:19.596 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:19.596 element at address: 0x20001987c540 with size: 0.250488 MiB 00:10:19.596 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:19.596 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:10:19.596 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_180315 00:10:19.596 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:10:19.596 associated memzone info: size: 0.125366 MiB name: RG_ring_2_180315 00:10:19.596 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:10:19.596 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:19.596 element at address: 0x200028269100 with size: 0.023743 MiB 00:10:19.596 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:19.596 element at address: 0x20000085b100 with size: 0.016113 MiB 00:10:19.596 associated memzone info: size: 0.015991 MiB name: RG_ring_3_180315 00:10:19.596 element at address: 0x20002826f240 with size: 0.002441 MiB 00:10:19.596 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:19.596 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:10:19.596 associated memzone info: size: 0.000183 MiB name: MP_msgpool_180315 00:10:19.596 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:10:19.596 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_180315 00:10:19.596 element at address: 0x20000085af00 with size: 0.000305 MiB 00:10:19.596 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_180315 00:10:19.596 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:10:19.596 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:19.596 23:51:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:19.596 23:51:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 180315 00:10:19.596 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 180315 ']' 00:10:19.596 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 180315 00:10:19.596 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:10:19.596 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.596 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 180315 00:10:19.596 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.596 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.596 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 180315' 00:10:19.596 killing process with pid 180315 00:10:19.596 23:51:54 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 180315 00:10:19.596 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 180315 00:10:19.855 00:10:19.855 real 0m1.044s 00:10:19.855 user 0m0.972s 00:10:19.855 sys 0m0.427s 00:10:19.855 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.855 23:51:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:19.855 ************************************ 00:10:19.855 END TEST dpdk_mem_utility 00:10:19.855 ************************************ 00:10:19.855 23:51:54 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event.sh 00:10:19.855 23:51:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:19.855 23:51:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.855 23:51:54 -- common/autotest_common.sh@10 -- # set +x 00:10:20.115 ************************************ 00:10:20.115 START TEST event 00:10:20.115 ************************************ 00:10:20.115 23:51:54 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event.sh 00:10:20.115 * Looking for test storage... 00:10:20.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event 00:10:20.115 23:51:54 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:20.115 23:51:54 event -- common/autotest_common.sh@1711 -- # lcov --version 00:10:20.115 23:51:54 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:20.115 23:51:54 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:20.115 23:51:54 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.115 23:51:54 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.115 23:51:54 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.115 23:51:54 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.115 23:51:54 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.115 23:51:54 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.115 23:51:54 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.115 23:51:54 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.115 23:51:54 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.115 23:51:54 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.115 23:51:54 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.115 23:51:54 event -- scripts/common.sh@344 -- # case "$op" in 00:10:20.115 23:51:54 event -- scripts/common.sh@345 -- # : 1 00:10:20.115 23:51:54 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.115 23:51:54 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.115 23:51:54 event -- scripts/common.sh@365 -- # decimal 1 00:10:20.115 23:51:54 event -- scripts/common.sh@353 -- # local d=1 00:10:20.115 23:51:54 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.115 23:51:54 event -- scripts/common.sh@355 -- # echo 1 00:10:20.115 23:51:54 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.115 23:51:54 event -- scripts/common.sh@366 -- # decimal 2 00:10:20.115 23:51:54 event -- scripts/common.sh@353 -- # local d=2 00:10:20.115 23:51:54 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.115 23:51:54 event -- scripts/common.sh@355 -- # echo 2 00:10:20.115 23:51:54 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.115 23:51:54 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.115 23:51:54 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.115 23:51:54 event -- scripts/common.sh@368 -- # return 0 00:10:20.115 23:51:54 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.115 23:51:54 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:20.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.115 --rc genhtml_branch_coverage=1 00:10:20.115 --rc genhtml_function_coverage=1 00:10:20.115 --rc genhtml_legend=1 00:10:20.115 --rc geninfo_all_blocks=1 00:10:20.115 --rc geninfo_unexecuted_blocks=1 00:10:20.115 00:10:20.115 ' 00:10:20.115 23:51:54 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:20.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.115 --rc genhtml_branch_coverage=1 00:10:20.115 --rc genhtml_function_coverage=1 00:10:20.115 --rc genhtml_legend=1 00:10:20.115 --rc geninfo_all_blocks=1 00:10:20.115 --rc geninfo_unexecuted_blocks=1 00:10:20.115 00:10:20.115 ' 00:10:20.115 23:51:54 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:20.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.115 --rc genhtml_branch_coverage=1 00:10:20.115 --rc genhtml_function_coverage=1 00:10:20.115 --rc genhtml_legend=1 00:10:20.115 --rc geninfo_all_blocks=1 00:10:20.115 --rc geninfo_unexecuted_blocks=1 00:10:20.115 00:10:20.115 ' 00:10:20.115 23:51:54 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:20.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.115 --rc genhtml_branch_coverage=1 00:10:20.115 --rc genhtml_function_coverage=1 00:10:20.115 --rc genhtml_legend=1 00:10:20.115 --rc geninfo_all_blocks=1 00:10:20.115 --rc geninfo_unexecuted_blocks=1 00:10:20.115 00:10:20.115 ' 00:10:20.115 23:51:54 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/nbd_common.sh 00:10:20.115 23:51:54 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:20.115 23:51:54 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:20.115 23:51:54 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:20.115 23:51:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.116 23:51:54 event -- common/autotest_common.sh@10 -- # set +x 00:10:20.116 ************************************ 00:10:20.116 START TEST event_perf 00:10:20.116 ************************************ 00:10:20.116 23:51:55 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event_perf/event_perf 
-m 0xF -t 1 00:10:20.116 Running I/O for 1 seconds...[2024-12-09 23:51:55.033449] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:10:20.116 [2024-12-09 23:51:55.033517] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180603 ] 00:10:20.375 [2024-12-09 23:51:55.113087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.375 [2024-12-09 23:51:55.156321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.375 [2024-12-09 23:51:55.156429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.375 [2024-12-09 23:51:55.156512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.375 [2024-12-09 23:51:55.156513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.311 Running I/O for 1 seconds... 00:10:21.311 lcore 0: 204758 00:10:21.311 lcore 1: 204757 00:10:21.311 lcore 2: 204757 00:10:21.311 lcore 3: 204757 00:10:21.311 done. 00:10:21.311 00:10:21.311 real 0m1.185s 00:10:21.311 user 0m4.094s 00:10:21.311 sys 0m0.086s 00:10:21.311 23:51:56 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.311 23:51:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:21.311 ************************************ 00:10:21.311 END TEST event_perf 00:10:21.311 ************************************ 00:10:21.311 23:51:56 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor/reactor -t 1 00:10:21.311 23:51:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:21.311 23:51:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.311 23:51:56 event -- common/autotest_common.sh@10 -- # set +x 00:10:21.571 ************************************ 00:10:21.571 START TEST event_reactor 00:10:21.571 ************************************ 00:10:21.571 23:51:56 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor/reactor -t 1 00:10:21.571 [2024-12-09 23:51:56.282556] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:10:21.571 [2024-12-09 23:51:56.282631] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180855 ] 00:10:21.571 [2024-12-09 23:51:56.358307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.571 [2024-12-09 23:51:56.397692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.510 test_start 00:10:22.510 oneshot 00:10:22.510 tick 100 00:10:22.510 tick 100 00:10:22.510 tick 250 00:10:22.510 tick 100 00:10:22.510 tick 100 00:10:22.510 tick 100 00:10:22.510 tick 250 00:10:22.510 tick 500 00:10:22.510 tick 100 00:10:22.510 tick 100 00:10:22.510 tick 250 00:10:22.510 tick 100 00:10:22.510 tick 100 00:10:22.510 test_end 00:10:22.510 00:10:22.510 real 0m1.169s 00:10:22.510 user 0m1.095s 00:10:22.510 sys 0m0.070s 00:10:22.510 23:51:57 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.510 23:51:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:22.510 ************************************ 00:10:22.510 END TEST event_reactor 00:10:22.510 ************************************ 00:10:22.769 23:51:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:22.769 23:51:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:22.769 23:51:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.769 23:51:57 event -- common/autotest_common.sh@10 -- # set +x 00:10:22.769 ************************************ 00:10:22.769 START TEST event_reactor_perf 00:10:22.769 ************************************ 00:10:22.769 23:51:57 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:22.769 [2024-12-09 23:51:57.524603] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:10:22.769 [2024-12-09 23:51:57.524674] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181101 ] 00:10:22.769 [2024-12-09 23:51:57.601733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.769 [2024-12-09 23:51:57.640139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.149 test_start 00:10:24.149 test_end 00:10:24.149 Performance: 500124 events per second 00:10:24.149 00:10:24.149 real 0m1.175s 00:10:24.149 user 0m1.092s 00:10:24.149 sys 0m0.079s 00:10:24.149 23:51:58 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.149 23:51:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:24.149 ************************************ 00:10:24.149 END TEST event_reactor_perf 00:10:24.149 ************************************ 00:10:24.149 23:51:58 event -- event/event.sh@49 -- # uname -s 00:10:24.149 23:51:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:24.149 23:51:58 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler/scheduler.sh 00:10:24.149 23:51:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:24.149 23:51:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.149 23:51:58 event -- common/autotest_common.sh@10 -- # set +x 00:10:24.149 ************************************ 00:10:24.149 START TEST event_scheduler 00:10:24.149 ************************************ 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler/scheduler.sh 00:10:24.149 * Looking for test storage... 
00:10:24.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.149 23:51:58 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:24.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.149 --rc genhtml_branch_coverage=1 00:10:24.149 --rc genhtml_function_coverage=1 00:10:24.149 --rc genhtml_legend=1 00:10:24.149 --rc geninfo_all_blocks=1 00:10:24.149 --rc geninfo_unexecuted_blocks=1 00:10:24.149 00:10:24.149 ' 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:24.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.149 --rc genhtml_branch_coverage=1 00:10:24.149 --rc genhtml_function_coverage=1 00:10:24.149 --rc genhtml_legend=1 00:10:24.149 --rc geninfo_all_blocks=1 00:10:24.149 --rc geninfo_unexecuted_blocks=1 00:10:24.149 00:10:24.149 ' 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:24.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.149 --rc genhtml_branch_coverage=1 00:10:24.149 --rc genhtml_function_coverage=1 00:10:24.149 --rc genhtml_legend=1 00:10:24.149 --rc geninfo_all_blocks=1 00:10:24.149 --rc geninfo_unexecuted_blocks=1 00:10:24.149 00:10:24.149 ' 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:24.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.149 --rc genhtml_branch_coverage=1 00:10:24.149 --rc genhtml_function_coverage=1 00:10:24.149 --rc genhtml_legend=1 00:10:24.149 --rc geninfo_all_blocks=1 00:10:24.149 --rc geninfo_unexecuted_blocks=1 00:10:24.149 00:10:24.149 ' 00:10:24.149 23:51:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:24.149 23:51:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=181389 00:10:24.149 23:51:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:24.149 23:51:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:24.149 23:51:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
181389 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 181389 ']' 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.149 23:51:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:24.149 [2024-12-09 23:51:58.975402] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:10:24.149 [2024-12-09 23:51:58.975454] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181389 ] 00:10:24.149 [2024-12-09 23:51:59.049653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.409 [2024-12-09 23:51:59.094356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.409 [2024-12-09 23:51:59.094462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.409 [2024-12-09 23:51:59.094572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.409 [2024-12-09 23:51:59.094572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.409 23:51:59 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.409 23:51:59 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:10:24.409 23:51:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:24.409 23:51:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.409 23:51:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:24.409 [2024-12-09 23:51:59.143086] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:10:24.409 [2024-12-09 23:51:59.143104] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:24.409 [2024-12-09 23:51:59.143114] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:24.409 [2024-12-09 23:51:59.143120] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:24.409 [2024-12-09 23:51:59.143125] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:24.409 23:51:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.409 23:51:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:24.409 23:51:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.409 23:51:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:24.409 [2024-12-09 23:51:59.221912] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
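The scheduler setup that the harness drives above amounts to two plain JSON-RPC calls against the target's UNIX socket. A minimal sketch using scripts/rpc.py directly, assuming the default socket path /var/tmp/spdk.sock (an assumption, not read from this trace); both method names appear in the rpc_get_methods listing earlier in this log:

    # Switch the target, started with --wait-for-rpc, to the dynamic scheduler,
    # then let subsystem initialization proceed.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
    # Confirm which scheduler is now active.
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_get_scheduler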
00:10:24.409 23:51:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.409 23:51:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:24.409 23:51:59 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:24.409 23:51:59 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.409 23:51:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:24.409 ************************************ 00:10:24.409 START TEST scheduler_create_thread 00:10:24.409 ************************************ 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:24.409 2 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:24.409 3 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:24.409 4 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:24.409 5 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:24.409 6 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:24.409 7 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:24.409 8 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:24.409 9 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:24.409 10 00:10:24.409 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.668 23:51:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:24.668 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.668 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:24.668 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.668 23:51:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:24.668 23:51:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:24.668 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.668 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:24.926 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.926 23:51:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:24.926 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.926 23:51:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:26.831 23:52:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.831 23:52:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:26.831 23:52:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:26.831 23:52:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.831 23:52:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:27.766 23:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.766 00:10:27.766 real 0m3.102s 00:10:27.766 user 0m0.026s 00:10:27.766 sys 0m0.004s 00:10:27.766 23:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.766 23:52:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:27.766 ************************************ 00:10:27.766 END TEST scheduler_create_thread 00:10:27.766 ************************************ 00:10:27.766 23:52:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:27.766 23:52:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 181389 00:10:27.766 23:52:02 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 181389 ']' 00:10:27.766 23:52:02 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 181389 00:10:27.766 23:52:02 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:27.766 23:52:02 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.766 23:52:02 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 181389 00:10:27.766 23:52:02 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:27.766 23:52:02 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:27.766 23:52:02 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 181389' 00:10:27.766 killing process with pid 181389 00:10:27.766 23:52:02 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 181389 00:10:27.766 23:52:02 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 181389 00:10:28.025 [2024-12-09 23:52:02.737403] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
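The thread churn exercised by scheduler_create_thread above is driven through the test's RPC plugin, and the same calls can be replayed by hand with scripts/rpc.py. A rough sketch, assuming the scheduler_plugin module is importable (for example via PYTHONPATH pointing at test/event/scheduler, which is an assumption rather than something shown in this trace); the flag values mirror the calls in the trace:

    # Create a thread pinned to core 0 (mask 0x1) that requests 100% activity,
    # as in the "active_pinned -m 0x1 -a 100" call above.
    PYTHONPATH=./test/event/scheduler ./scripts/rpc.py --plugin scheduler_plugin \
        scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # Drop thread 11 to 50% requested activity, then delete thread 12, matching
    # the scheduler_thread_set_active and scheduler_thread_delete calls above.
    PYTHONPATH=./test/event/scheduler ./scripts/rpc.py --plugin scheduler_plugin \
        scheduler_thread_set_active 11 50
    PYTHONPATH=./test/event/scheduler ./scripts/rpc.py --plugin scheduler_plugin \
        scheduler_thread_delete 12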
00:10:28.025 00:10:28.025 real 0m4.172s 00:10:28.025 user 0m6.671s 00:10:28.025 sys 0m0.385s 00:10:28.025 23:52:02 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.025 23:52:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:28.025 ************************************ 00:10:28.025 END TEST event_scheduler 00:10:28.025 ************************************ 00:10:28.025 23:52:02 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:28.284 23:52:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:28.284 23:52:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:28.284 23:52:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.284 23:52:02 event -- common/autotest_common.sh@10 -- # set +x 00:10:28.284 ************************************ 00:10:28.284 START TEST app_repeat 00:10:28.284 ************************************ 00:10:28.284 23:52:02 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:10:28.284 23:52:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.284 23:52:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.284 23:52:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:28.284 23:52:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:28.284 23:52:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:28.284 23:52:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:28.284 23:52:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:28.284 23:52:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=182131 00:10:28.284 23:52:03 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:28.284 23:52:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:28.284 23:52:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 182131' 00:10:28.284 Process app_repeat pid: 182131 00:10:28.284 23:52:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:28.284 23:52:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:28.284 spdk_app_start Round 0 00:10:28.284 23:52:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 182131 /var/tmp/spdk-nbd.sock 00:10:28.284 23:52:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 182131 ']' 00:10:28.284 23:52:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:28.284 23:52:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.284 23:52:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:28.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:28.284 23:52:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.284 23:52:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:28.284 [2024-12-09 23:52:03.033258] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:10:28.284 [2024-12-09 23:52:03.033308] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182131 ] 00:10:28.284 [2024-12-09 23:52:03.111489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:28.284 [2024-12-09 23:52:03.154619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.284 [2024-12-09 23:52:03.154621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.543 23:52:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.543 23:52:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:28.543 23:52:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:28.543 Malloc0 00:10:28.543 23:52:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:28.802 Malloc1 00:10:28.802 23:52:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:28.802 23:52:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:29.061 /dev/nbd0 00:10:29.061 23:52:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:29.061 23:52:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:29.061 23:52:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:29.061 23:52:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:29.061 23:52:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:29.061 23:52:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:29.061 23:52:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:10:29.061 23:52:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:29.061 23:52:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:29.061 23:52:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:29.061 23:52:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:29.061 1+0 records in 00:10:29.061 1+0 records out 00:10:29.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229324 s, 17.9 MB/s 00:10:29.061 23:52:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:10:29.061 23:52:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:29.061 23:52:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:10:29.061 23:52:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:29.061 23:52:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:29.061 23:52:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:29.061 23:52:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:29.061 23:52:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:29.321 /dev/nbd1 00:10:29.321 23:52:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:29.321 23:52:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:29.321 23:52:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:29.321 23:52:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:29.321 23:52:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:29.321 23:52:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:29.321 23:52:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:29.321 23:52:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:29.321 23:52:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:29.321 23:52:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:29.321 23:52:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:29.321 1+0 records in 00:10:29.321 1+0 records out 00:10:29.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212322 s, 19.3 MB/s 00:10:29.321 23:52:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:10:29.321 23:52:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:29.321 23:52:04 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:10:29.321 23:52:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:29.321 23:52:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:29.321 23:52:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:29.321 23:52:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
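The waitfornbd helper traced above simply polls /proc/partitions until the kernel exposes the new nbd device, then does one direct-I/O read to confirm it answers. A rough stand-alone equivalent is sketched below; the temp-file path and the sleep interval are guesses, not values read from this log.

    # Poll /proc/partitions until the nbd device appears, then prove it is
    # readable with a single 4 KiB direct-I/O read (paths are illustrative).
    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # retry interval is a guess; the helper's own delay is not shown here
        done
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]    # non-empty read => device is usable
    }
    waitfornbd nbd0
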
00:10:29.321 23:52:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:29.321 23:52:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.321 23:52:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:29.580 { 00:10:29.580 "nbd_device": "/dev/nbd0", 00:10:29.580 "bdev_name": "Malloc0" 00:10:29.580 }, 00:10:29.580 { 00:10:29.580 "nbd_device": "/dev/nbd1", 00:10:29.580 "bdev_name": "Malloc1" 00:10:29.580 } 00:10:29.580 ]' 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:29.580 { 00:10:29.580 "nbd_device": "/dev/nbd0", 00:10:29.580 "bdev_name": "Malloc0" 00:10:29.580 }, 00:10:29.580 { 00:10:29.580 "nbd_device": "/dev/nbd1", 00:10:29.580 "bdev_name": "Malloc1" 00:10:29.580 } 00:10:29.580 ]' 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:29.580 /dev/nbd1' 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:29.580 /dev/nbd1' 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:29.580 256+0 records in 00:10:29.580 256+0 records out 00:10:29.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101179 s, 104 MB/s 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:29.580 256+0 records in 00:10:29.580 256+0 records out 00:10:29.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139104 s, 75.4 MB/s 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:29.580 256+0 records in 00:10:29.580 256+0 records out 00:10:29.580 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014764 s, 71.0 MB/s 
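The write pass above and the verify pass that follows use nothing more exotic than dd and cmp: 1 MiB of random data is written to both nbd devices with direct I/O and then compared back byte-for-byte. Condensed, and with an illustrative temp-file location in place of the test's own tree:

    # Same write/verify dance as nbd_dd_data_verify, condensed.
    tmp=/tmp/nbdrandtest                                      # illustrative path
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct # write pass
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                            # verify pass; non-zero exit on mismatch
    done
    rm "$tmp"
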
00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd0 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd1 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:10:29.580 23:52:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:29.581 23:52:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.581 23:52:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:29.581 23:52:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:29.581 23:52:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:29.581 23:52:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:29.581 23:52:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:29.840 23:52:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:29.840 23:52:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:29.840 23:52:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:29.840 23:52:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:29.840 23:52:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:29.840 23:52:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:29.840 23:52:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:29.840 23:52:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:29.840 23:52:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:29.840 23:52:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:30.099 23:52:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:30.099 23:52:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:30.099 23:52:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:30.099 23:52:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:30.099 23:52:04 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:30.099 23:52:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:30.099 23:52:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:30.099 23:52:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:30.099 23:52:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:30.099 23:52:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:30.099 23:52:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:30.358 23:52:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:30.358 23:52:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:30.358 23:52:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:30.358 23:52:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:30.358 23:52:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:30.358 23:52:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:30.358 23:52:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:30.358 23:52:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:30.358 23:52:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:30.358 23:52:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:30.358 23:52:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:30.358 23:52:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:30.358 23:52:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:30.617 23:52:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:30.876 [2024-12-09 23:52:05.575795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:30.876 [2024-12-09 23:52:05.612824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.876 [2024-12-09 23:52:05.612826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.877 [2024-12-09 23:52:05.653331] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:30.877 [2024-12-09 23:52:05.653369] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:34.166 23:52:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:34.166 23:52:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:34.166 spdk_app_start Round 1 00:10:34.166 23:52:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 182131 /var/tmp/spdk-nbd.sock 00:10:34.166 23:52:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 182131 ']' 00:10:34.166 23:52:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:34.166 23:52:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.166 23:52:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:34.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
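The nbd_get_count calls visible in the trace are just an RPC whose JSON output is reduced with jq and grep; once both disks are stopped the array is empty and the count drops to 0, which is what lets the round finish cleanly. A hedged sketch (the socket name comes from the log, SPDK_DIR is an assumption carried over from the earlier sketch):

    # Reduce nbd_get_disks JSON to device names and a count, as nbd_get_count does.
    sock=/var/tmp/spdk-nbd.sock
    json=$("$SPDK_DIR/scripts/rpc.py" -s "$sock" nbd_get_disks)
    names=$(jq -r '.[] | .nbd_device' <<<"$json")
    count=$(grep -c /dev/nbd <<<"$names" || true)   # prints 0 when the array is empty
    echo "attached nbd devices: $count"
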
00:10:34.166 23:52:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.166 23:52:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:34.166 23:52:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.166 23:52:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:34.166 23:52:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:34.166 Malloc0 00:10:34.166 23:52:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:34.166 Malloc1 00:10:34.166 23:52:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:34.166 23:52:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:34.426 /dev/nbd0 00:10:34.426 23:52:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:34.426 23:52:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:34.426 23:52:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:34.426 23:52:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:34.426 23:52:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.426 23:52:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.426 23:52:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:34.426 23:52:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:34.426 23:52:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.426 23:52:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.426 23:52:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:34.426 1+0 records in 00:10:34.426 1+0 records out 00:10:34.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194704 s, 21.0 MB/s 00:10:34.426 23:52:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:10:34.426 23:52:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:34.426 23:52:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:10:34.426 23:52:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.426 23:52:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:34.426 23:52:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:34.426 23:52:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:34.426 23:52:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:34.685 /dev/nbd1 00:10:34.685 23:52:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:34.685 23:52:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:34.685 23:52:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:34.685 23:52:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:34.685 23:52:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.685 23:52:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.685 23:52:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:34.685 23:52:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:34.685 23:52:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.685 23:52:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.685 23:52:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:34.685 1+0 records in 00:10:34.685 1+0 records out 00:10:34.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193467 s, 21.2 MB/s 00:10:34.685 23:52:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:10:34.685 23:52:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:34.685 23:52:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:10:34.685 23:52:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.685 23:52:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:34.685 23:52:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:34.685 23:52:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:34.685 23:52:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:34.685 23:52:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.685 23:52:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:34.945 { 00:10:34.945 "nbd_device": "/dev/nbd0", 00:10:34.945 "bdev_name": "Malloc0" 00:10:34.945 }, 00:10:34.945 { 00:10:34.945 "nbd_device": "/dev/nbd1", 00:10:34.945 "bdev_name": "Malloc1" 00:10:34.945 } 00:10:34.945 ]' 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:34.945 { 00:10:34.945 "nbd_device": "/dev/nbd0", 00:10:34.945 "bdev_name": "Malloc0" 00:10:34.945 }, 00:10:34.945 { 00:10:34.945 "nbd_device": "/dev/nbd1", 00:10:34.945 "bdev_name": "Malloc1" 00:10:34.945 } 00:10:34.945 ]' 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:34.945 /dev/nbd1' 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:34.945 /dev/nbd1' 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:34.945 256+0 records in 00:10:34.945 256+0 records out 00:10:34.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106578 s, 98.4 MB/s 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:34.945 256+0 records in 00:10:34.945 256+0 records out 00:10:34.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138625 s, 75.6 MB/s 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:34.945 256+0 records in 00:10:34.945 256+0 records out 00:10:34.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0156964 s, 66.8 MB/s 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd0 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd1 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.945 23:52:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:35.205 23:52:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:35.205 23:52:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:35.205 23:52:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:35.205 23:52:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.205 23:52:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.205 23:52:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:35.205 23:52:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:35.205 23:52:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.205 23:52:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.205 23:52:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:35.465 23:52:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:35.465 23:52:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:35.465 23:52:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:35.465 23:52:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.465 23:52:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.465 23:52:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:35.465 23:52:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:35.465 23:52:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.465 23:52:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:10:35.465 23:52:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.465 23:52:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:35.724 23:52:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:35.724 23:52:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:35.724 23:52:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:35.724 23:52:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:35.724 23:52:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:35.724 23:52:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:35.724 23:52:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:35.724 23:52:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:35.724 23:52:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:35.724 23:52:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:35.724 23:52:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:35.724 23:52:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:35.724 23:52:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:35.984 23:52:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:35.984 [2024-12-09 23:52:10.911792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:36.244 [2024-12-09 23:52:10.949572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.244 [2024-12-09 23:52:10.949574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.244 [2024-12-09 23:52:10.991083] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:36.244 [2024-12-09 23:52:10.991121] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:39.534 23:52:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:39.534 23:52:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:39.534 spdk_app_start Round 2 00:10:39.534 23:52:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 182131 /var/tmp/spdk-nbd.sock 00:10:39.534 23:52:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 182131 ']' 00:10:39.534 23:52:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:39.534 23:52:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.534 23:52:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:39.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
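Each round of app_repeat rebuilds the same topology over the app's RPC socket: two 64 MiB malloc bdevs with 4 KiB blocks, each exported as an nbd device. In isolation that setup is roughly the following; the rpc wrapper and SPDK_DIR are illustrative shorthand, the sizes and names match the trace.

    # Per-round setup: two 64 MiB malloc bdevs (4 KiB blocks) exported over nbd.
    sock=/var/tmp/spdk-nbd.sock
    rpc() { "$SPDK_DIR/scripts/rpc.py" -s "$sock" "$@"; }
    rpc bdev_malloc_create 64 4096        # prints the new bdev name, e.g. Malloc0
    rpc bdev_malloc_create 64 4096        # e.g. Malloc1
    rpc nbd_start_disk Malloc0 /dev/nbd0
    rpc nbd_start_disk Malloc1 /dev/nbd1
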
00:10:39.534 23:52:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.534 23:52:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:39.534 23:52:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.534 23:52:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:39.534 23:52:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:39.534 Malloc0 00:10:39.534 23:52:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:39.534 Malloc1 00:10:39.534 23:52:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:39.534 23:52:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:39.793 /dev/nbd0 00:10:39.793 23:52:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:39.793 23:52:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:39.793 23:52:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:39.793 23:52:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:39.793 23:52:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:39.793 23:52:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:39.793 23:52:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:39.793 23:52:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:39.793 23:52:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:39.793 23:52:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:39.793 23:52:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:39.793 1+0 records in 00:10:39.793 1+0 records out 00:10:39.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188187 s, 21.8 MB/s 00:10:39.793 23:52:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:10:39.793 23:52:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:39.793 23:52:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:10:39.793 23:52:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:39.793 23:52:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:39.793 23:52:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:39.793 23:52:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:39.793 23:52:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:40.052 /dev/nbd1 00:10:40.052 23:52:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:40.052 23:52:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:40.052 23:52:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:40.052 23:52:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:40.052 23:52:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:40.052 23:52:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:40.052 23:52:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:40.052 23:52:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:40.052 23:52:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:40.052 23:52:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:40.052 23:52:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:40.052 1+0 records in 00:10:40.052 1+0 records out 00:10:40.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021026 s, 19.5 MB/s 00:10:40.052 23:52:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:10:40.052 23:52:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:40.052 23:52:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:10:40.052 23:52:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:40.052 23:52:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:40.052 23:52:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:40.052 23:52:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:40.052 23:52:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:40.052 23:52:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.052 23:52:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:10:40.310 23:52:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:40.311 { 00:10:40.311 "nbd_device": "/dev/nbd0", 00:10:40.311 "bdev_name": "Malloc0" 00:10:40.311 }, 00:10:40.311 { 00:10:40.311 "nbd_device": "/dev/nbd1", 00:10:40.311 "bdev_name": "Malloc1" 00:10:40.311 } 00:10:40.311 ]' 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:40.311 { 00:10:40.311 "nbd_device": "/dev/nbd0", 00:10:40.311 "bdev_name": "Malloc0" 00:10:40.311 }, 00:10:40.311 { 00:10:40.311 "nbd_device": "/dev/nbd1", 00:10:40.311 "bdev_name": "Malloc1" 00:10:40.311 } 00:10:40.311 ]' 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:40.311 /dev/nbd1' 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:40.311 /dev/nbd1' 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:40.311 256+0 records in 00:10:40.311 256+0 records out 00:10:40.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00997409 s, 105 MB/s 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:40.311 256+0 records in 00:10:40.311 256+0 records out 00:10:40.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148723 s, 70.5 MB/s 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:40.311 256+0 records in 00:10:40.311 256+0 records out 00:10:40.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153954 s, 68.1 MB/s 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd0 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd1 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.311 23:52:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:40.569 23:52:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:40.569 23:52:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:40.569 23:52:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:40.569 23:52:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.569 23:52:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.569 23:52:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:40.569 23:52:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:40.569 23:52:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.569 23:52:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.569 23:52:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:40.827 23:52:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:40.827 23:52:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:40.827 23:52:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:40.827 23:52:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.827 23:52:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.827 23:52:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:40.827 23:52:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:40.827 23:52:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.827 23:52:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:10:40.827 23:52:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.827 23:52:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:41.085 23:52:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:41.085 23:52:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:41.085 23:52:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:41.085 23:52:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:41.085 23:52:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:41.085 23:52:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:41.085 23:52:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:41.085 23:52:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:41.085 23:52:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:41.085 23:52:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:41.085 23:52:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:41.085 23:52:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:41.085 23:52:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:41.344 23:52:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:41.344 [2024-12-09 23:52:16.254644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:41.603 [2024-12-09 23:52:16.293711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.603 [2024-12-09 23:52:16.293712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.603 [2024-12-09 23:52:16.334727] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:41.603 [2024-12-09 23:52:16.334765] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:44.888 23:52:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 182131 /var/tmp/spdk-nbd.sock 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 182131 ']' 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:44.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
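Teardown in both this test and the scheduler test goes through the killprocess helper: it ignores PIDs that are already gone, refuses to kill a process whose name resolves to sudo, then kills and waits. A condensed sketch of that logic, with the FreeBSD branch and most error handling trimmed:

    # Condensed killprocess: skip dead PIDs, never kill sudo, then kill and wait.
    killprocess() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0    # process already gone
        name=$(ps --no-headers -o comm= "$pid")   # Linux branch; FreeBSD uses a different lookup
        [ "$name" = sudo ] && return 1            # refuse to kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                       # reap it if it is our child
    }
    killprocess "$repeat_pid"                     # e.g. 182131 in this run
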
00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:44.888 23:52:19 event.app_repeat -- event/event.sh@39 -- # killprocess 182131 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 182131 ']' 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 182131 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 182131 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 182131' 00:10:44.888 killing process with pid 182131 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@973 -- # kill 182131 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@978 -- # wait 182131 00:10:44.888 spdk_app_start is called in Round 0. 00:10:44.888 Shutdown signal received, stop current app iteration 00:10:44.888 Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 reinitialization... 00:10:44.888 spdk_app_start is called in Round 1. 00:10:44.888 Shutdown signal received, stop current app iteration 00:10:44.888 Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 reinitialization... 00:10:44.888 spdk_app_start is called in Round 2. 00:10:44.888 Shutdown signal received, stop current app iteration 00:10:44.888 Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 reinitialization... 00:10:44.888 spdk_app_start is called in Round 3. 
00:10:44.888 Shutdown signal received, stop current app iteration 00:10:44.888 23:52:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:44.888 23:52:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:44.888 00:10:44.888 real 0m16.506s 00:10:44.888 user 0m36.357s 00:10:44.888 sys 0m2.514s 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.888 23:52:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:44.888 ************************************ 00:10:44.888 END TEST app_repeat 00:10:44.888 ************************************ 00:10:44.888 23:52:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:44.888 23:52:19 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/cpu_locks.sh 00:10:44.888 23:52:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:44.888 23:52:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.888 23:52:19 event -- common/autotest_common.sh@10 -- # set +x 00:10:44.888 ************************************ 00:10:44.888 START TEST cpu_locks 00:10:44.888 ************************************ 00:10:44.888 23:52:19 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/cpu_locks.sh 00:10:44.888 * Looking for test storage... 00:10:44.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event 00:10:44.888 23:52:19 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:44.888 23:52:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:10:44.888 23:52:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:44.888 23:52:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.888 23:52:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:44.888 23:52:19 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.889 23:52:19 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:44.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.889 --rc genhtml_branch_coverage=1 00:10:44.889 --rc genhtml_function_coverage=1 00:10:44.889 --rc genhtml_legend=1 00:10:44.889 --rc geninfo_all_blocks=1 00:10:44.889 --rc geninfo_unexecuted_blocks=1 00:10:44.889 00:10:44.889 ' 00:10:44.889 23:52:19 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:44.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.889 --rc genhtml_branch_coverage=1 00:10:44.889 --rc genhtml_function_coverage=1 00:10:44.889 --rc genhtml_legend=1 00:10:44.889 --rc geninfo_all_blocks=1 00:10:44.889 --rc geninfo_unexecuted_blocks=1 00:10:44.889 00:10:44.889 ' 00:10:44.889 23:52:19 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:44.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.889 --rc genhtml_branch_coverage=1 00:10:44.889 --rc genhtml_function_coverage=1 00:10:44.889 --rc genhtml_legend=1 00:10:44.889 --rc geninfo_all_blocks=1 00:10:44.889 --rc geninfo_unexecuted_blocks=1 00:10:44.889 00:10:44.889 ' 00:10:44.889 23:52:19 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:44.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.889 --rc genhtml_branch_coverage=1 00:10:44.889 --rc genhtml_function_coverage=1 00:10:44.889 --rc genhtml_legend=1 00:10:44.889 --rc geninfo_all_blocks=1 00:10:44.889 --rc geninfo_unexecuted_blocks=1 00:10:44.889 00:10:44.889 ' 00:10:44.889 23:52:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:44.889 23:52:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:44.889 23:52:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:44.889 23:52:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:44.889 23:52:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:44.889 23:52:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.889 23:52:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:44.889 ************************************ 
00:10:44.889 START TEST default_locks 00:10:44.889 ************************************ 00:10:44.889 23:52:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:10:44.889 23:52:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=185130 00:10:44.889 23:52:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 185130 00:10:44.889 23:52:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:10:44.889 23:52:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 185130 ']' 00:10:44.889 23:52:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.889 23:52:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.889 23:52:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.889 23:52:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.889 23:52:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:45.148 [2024-12-09 23:52:19.835505] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:10:45.148 [2024-12-09 23:52:19.835545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185130 ] 00:10:45.148 [2024-12-09 23:52:19.912920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.148 [2024-12-09 23:52:19.954075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.408 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.408 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:10:45.408 23:52:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 185130 00:10:45.408 23:52:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 185130 00:10:45.408 23:52:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:45.667 lslocks: write error 00:10:45.667 23:52:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 185130 00:10:45.667 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 185130 ']' 00:10:45.667 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 185130 00:10:45.667 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:10:45.927 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.927 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 185130 00:10:45.927 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.927 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.927 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
185130' 00:10:45.927 killing process with pid 185130 00:10:45.927 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 185130 00:10:45.927 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 185130 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 185130 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 185130 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 185130 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 185130 ']' 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
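A minimal sketch of the lock check and teardown traced above in default_locks; it assumes $pid holds the pid of a spdk_tgt started with -m 0x1, and stands in for the locks_exist/killprocess helpers rather than reproducing them exactly.

# A target started with -m 0x1 takes a POSIX lock on /var/tmp/spdk_cpu_lock_000,
# which lslocks attributes to its pid (the stray 'lslocks: write error' above is
# most likely lslocks hitting a closed pipe once grep -q has matched and exited).
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid holds its CPU core lock"
fi

# Teardown in the same spirit as killprocess: SIGTERM, then poll until the pid
# is gone so the lock is released before the next test starts.
kill "$pid"
while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done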
00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:46.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 850: kill: (185130) - No such process 00:10:46.189 ERROR: process (pid: 185130) is no longer running 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:46.189 00:10:46.189 real 0m1.180s 00:10:46.189 user 0m1.127s 00:10:46.189 sys 0m0.535s 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.189 23:52:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:46.189 ************************************ 00:10:46.189 END TEST default_locks 00:10:46.189 ************************************ 00:10:46.189 23:52:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:46.189 23:52:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:46.189 23:52:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.189 23:52:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:46.189 ************************************ 00:10:46.189 START TEST default_locks_via_rpc 00:10:46.189 ************************************ 00:10:46.189 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:10:46.189 23:52:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=185386 00:10:46.189 23:52:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 185386 00:10:46.189 23:52:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:10:46.189 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 185386 ']' 00:10:46.189 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.189 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.189 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
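The tail of default_locks above asserts the negative case: once the target is gone, waiting on its pid has to fail. The NOT wrapper from autotest_common.sh inverts an exit status; a simplified stand-in (not the real helper) looks like this.

# Simplified stand-in for NOT: succeed only if the wrapped command fails.
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}

# Usage in the same spirit as the trace: the killed pid must no longer exist.
NOT kill -0 "$pid" && echo "process $pid is really gone"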
00:10:46.189 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.189 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.189 [2024-12-09 23:52:21.079625] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:10:46.189 [2024-12-09 23:52:21.079665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185386 ] 00:10:46.448 [2024-12-09 23:52:21.154061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.448 [2024-12-09 23:52:21.190491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 185386 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 185386 00:10:46.708 23:52:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:46.969 23:52:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 185386 00:10:46.969 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 185386 ']' 00:10:46.969 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 185386 00:10:46.969 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:10:46.969 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.969 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 185386 00:10:46.969 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.969 23:52:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.969 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 185386' 00:10:46.969 killing process with pid 185386 00:10:46.969 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 185386 00:10:46.969 23:52:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 185386 00:10:47.228 00:10:47.228 real 0m1.059s 00:10:47.228 user 0m1.003s 00:10:47.228 sys 0m0.485s 00:10:47.228 23:52:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.228 23:52:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.228 ************************************ 00:10:47.229 END TEST default_locks_via_rpc 00:10:47.229 ************************************ 00:10:47.229 23:52:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:47.229 23:52:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:47.229 23:52:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.229 23:52:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:47.229 ************************************ 00:10:47.229 START TEST non_locking_app_on_locked_coremask 00:10:47.229 ************************************ 00:10:47.229 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:10:47.229 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=185640 00:10:47.229 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 185640 /var/tmp/spdk.sock 00:10:47.229 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:10:47.229 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 185640 ']' 00:10:47.229 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.229 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.229 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.229 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.229 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:47.488 [2024-12-09 23:52:22.216335] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
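default_locks_via_rpc, which finishes above, drives the same per-core lock through RPC instead of process lifetime. A short sketch of that flow, assuming the default /var/tmp/spdk.sock socket and an SPDK checkout at $SPDK_DIR; the two RPC names are taken from the trace.

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk}
rpc="$SPDK_DIR/scripts/rpc.py"

# Release the per-core lock files while the target keeps running...
"$rpc" framework_disable_cpumask_locks

# ...then re-claim them; this fails if another process locked the cores meanwhile.
"$rpc" framework_enable_cpumask_locks

# The lock for core 0 should be visible again.
lslocks | grep spdk_cpu_lock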
00:10:47.488 [2024-12-09 23:52:22.216379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185640 ] 00:10:47.488 [2024-12-09 23:52:22.293201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.488 [2024-12-09 23:52:22.334337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.748 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.748 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:47.748 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=185654 00:10:47.748 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 185654 /var/tmp/spdk2.sock 00:10:47.748 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:47.748 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 185654 ']' 00:10:47.748 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:47.748 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.748 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:47.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:47.748 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.748 23:52:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:47.748 [2024-12-09 23:52:22.599914] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:10:47.748 [2024-12-09 23:52:22.599962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185654 ] 00:10:48.007 [2024-12-09 23:52:22.692734] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
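non_locking_app_on_locked_coremask, traced above and continuing below, shows that --disable-cpumask-locks lets a second target share an already locked core. A compressed sketch under the same assumed $SPDK_DIR; the real test also runs waitforlisten against each RPC socket before proceeding.

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk}

# First target claims the core-0 lock on the default RPC socket.
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
pid1=$!

# Second target reuses core 0 but skips lock claiming and uses its own socket.
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!

sleep 1
# Only pid1 should appear here; pid2 started anyway because it never tried to claim.
lslocks | grep spdk_cpu_lock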
00:10:48.008 [2024-12-09 23:52:22.692756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.008 [2024-12-09 23:52:22.776078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.576 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.576 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:48.576 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 185640 00:10:48.576 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 185640 00:10:48.576 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:49.145 lslocks: write error 00:10:49.145 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 185640 00:10:49.145 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 185640 ']' 00:10:49.145 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 185640 00:10:49.145 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:49.145 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.145 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 185640 00:10:49.145 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.145 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.145 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 185640' 00:10:49.145 killing process with pid 185640 00:10:49.145 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 185640 00:10:49.145 23:52:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 185640 00:10:49.714 23:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 185654 00:10:49.714 23:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 185654 ']' 00:10:49.714 23:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 185654 00:10:49.714 23:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:49.714 23:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.714 23:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 185654 00:10:49.714 23:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.714 23:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.714 23:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 185654' 00:10:49.714 killing 
process with pid 185654 00:10:49.714 23:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 185654 00:10:49.714 23:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 185654 00:10:50.282 00:10:50.282 real 0m2.750s 00:10:50.282 user 0m2.888s 00:10:50.282 sys 0m0.918s 00:10:50.282 23:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.282 23:52:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:50.282 ************************************ 00:10:50.282 END TEST non_locking_app_on_locked_coremask 00:10:50.282 ************************************ 00:10:50.282 23:52:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:50.282 23:52:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:50.282 23:52:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.282 23:52:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:50.282 ************************************ 00:10:50.282 START TEST locking_app_on_unlocked_coremask 00:10:50.282 ************************************ 00:10:50.282 23:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:10:50.282 23:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=186144 00:10:50.282 23:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 186144 /var/tmp/spdk.sock 00:10:50.282 23:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:50.282 23:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 186144 ']' 00:10:50.282 23:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.283 23:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.283 23:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.283 23:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.283 23:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:50.283 [2024-12-09 23:52:25.032455] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:10:50.283 [2024-12-09 23:52:25.032496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid186144 ] 00:10:50.283 [2024-12-09 23:52:25.105559] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:50.283 [2024-12-09 23:52:25.105581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.283 [2024-12-09 23:52:25.147007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.542 23:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.542 23:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:50.542 23:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=186150 00:10:50.542 23:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 186150 /var/tmp/spdk2.sock 00:10:50.543 23:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:50.543 23:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 186150 ']' 00:10:50.543 23:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:50.543 23:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.543 23:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:50.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:50.543 23:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.543 23:52:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:50.543 [2024-12-09 23:52:25.409743] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
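locking_app_on_unlocked_coremask, in progress above, is the mirror image: the first target opts out of locking, so a later target on the same core can still claim the lock. Sketch under the same assumptions as the previous snippets.

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk}

# Holds no lock at all.
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 --disable-cpumask-locks &
pid1=$!

# Claims /var/tmp/spdk_cpu_lock_000 despite the overlap, since nobody holds it.
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &
pid2=$!

sleep 1
lslocks | grep spdk_cpu_lock    # only pid2 shows up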
00:10:50.543 [2024-12-09 23:52:25.409792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid186150 ] 00:10:50.802 [2024-12-09 23:52:25.501176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.802 [2024-12-09 23:52:25.581182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.371 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.371 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:51.371 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 186150 00:10:51.371 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 186150 00:10:51.371 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:51.939 lslocks: write error 00:10:51.939 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 186144 00:10:51.939 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 186144 ']' 00:10:51.939 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 186144 00:10:51.939 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:51.939 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.939 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 186144 00:10:51.939 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.939 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.939 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 186144' 00:10:51.939 killing process with pid 186144 00:10:51.939 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 186144 00:10:51.939 23:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 186144 00:10:52.882 23:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 186150 00:10:52.882 23:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 186150 ']' 00:10:52.882 23:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 186150 00:10:52.882 23:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:52.882 23:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.882 23:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 186150 00:10:52.882 23:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.882 23:52:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.882 23:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 186150' 00:10:52.882 killing process with pid 186150 00:10:52.882 23:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 186150 00:10:52.882 23:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 186150 00:10:52.882 00:10:52.882 real 0m2.828s 00:10:52.882 user 0m3.003s 00:10:52.882 sys 0m0.926s 00:10:52.882 23:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.882 23:52:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:52.882 ************************************ 00:10:52.882 END TEST locking_app_on_unlocked_coremask 00:10:52.882 ************************************ 00:10:53.142 23:52:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:53.142 23:52:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:53.142 23:52:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.142 23:52:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:53.142 ************************************ 00:10:53.142 START TEST locking_app_on_locked_coremask 00:10:53.142 ************************************ 00:10:53.142 23:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:10:53.142 23:52:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=186642 00:10:53.142 23:52:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 186642 /var/tmp/spdk.sock 00:10:53.142 23:52:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:10:53.142 23:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 186642 ']' 00:10:53.142 23:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.142 23:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.142 23:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.142 23:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.142 23:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:53.142 [2024-12-09 23:52:27.924716] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:10:53.142 [2024-12-09 23:52:27.924756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid186642 ] 00:10:53.142 [2024-12-09 23:52:27.997964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.142 [2024-12-09 23:52:28.037809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=186650 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 186650 /var/tmp/spdk2.sock 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 186650 /var/tmp/spdk2.sock 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 186650 /var/tmp/spdk2.sock 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 186650 ']' 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:53.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.402 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:53.402 [2024-12-09 23:52:28.309562] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:10:53.402 [2024-12-09 23:52:28.309611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid186650 ] 00:10:53.661 [2024-12-09 23:52:28.401652] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 186642 has claimed it. 00:10:53.661 [2024-12-09 23:52:28.401692] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:54.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 850: kill: (186650) - No such process 00:10:54.229 ERROR: process (pid: 186650) is no longer running 00:10:54.229 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.229 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:54.229 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:54.229 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:54.229 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:54.229 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:54.229 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 186642 00:10:54.229 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 186642 00:10:54.229 23:52:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:54.798 lslocks: write error 00:10:54.798 23:52:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 186642 00:10:54.798 23:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 186642 ']' 00:10:54.798 23:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 186642 00:10:54.798 23:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:54.798 23:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.798 23:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 186642 00:10:54.798 23:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.798 23:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.798 23:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 186642' 00:10:54.798 killing process with pid 186642 00:10:54.798 23:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 186642 00:10:54.798 23:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 186642 00:10:55.058 00:10:55.058 real 0m1.934s 00:10:55.058 user 0m2.071s 00:10:55.058 sys 0m0.662s 00:10:55.058 23:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.058 
23:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:55.058 ************************************ 00:10:55.058 END TEST locking_app_on_locked_coremask 00:10:55.058 ************************************ 00:10:55.058 23:52:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:55.058 23:52:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:55.058 23:52:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.058 23:52:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:55.058 ************************************ 00:10:55.058 START TEST locking_overlapped_coremask 00:10:55.058 ************************************ 00:10:55.058 23:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:10:55.058 23:52:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=187044 00:10:55.058 23:52:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 187044 /var/tmp/spdk.sock 00:10:55.058 23:52:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x7 00:10:55.058 23:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 187044 ']' 00:10:55.058 23:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.058 23:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.058 23:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.058 23:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.058 23:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:55.058 [2024-12-09 23:52:29.926692] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
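locking_app_on_locked_coremask, which ends above, pins down the failure mode: with the first target holding the core-0 lock, a plain second launch must log the claim_cpu_cores error and exit instead of listening. A rough equivalent of that assertion (the real test uses NOT waitforlisten on the child pid):

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk}

# Attempt a second, locking launch on the already claimed core.
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &
pid2=$!

sleep 1
if kill -0 "$pid2" 2>/dev/null; then
    echo "unexpected: second target is still running on the locked core" >&2
    exit 1
fi
echo "expected: claim_cpu_cores refused core 0 and the second target exited"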
00:10:55.058 [2024-12-09 23:52:29.926735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187044 ] 00:10:55.317 [2024-12-09 23:52:30.001006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:55.317 [2024-12-09 23:52:30.048131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.317 [2024-12-09 23:52:30.048241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.317 [2024-12-09 23:52:30.048242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=187136 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 187136 /var/tmp/spdk2.sock 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 187136 /var/tmp/spdk2.sock 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 187136 /var/tmp/spdk2.sock 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 187136 ']' 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:55.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.577 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:55.577 [2024-12-09 23:52:30.331499] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:10:55.577 [2024-12-09 23:52:30.331548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187136 ] 00:10:55.577 [2024-12-09 23:52:30.425798] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 187044 has claimed it. 00:10:55.577 [2024-12-09 23:52:30.425839] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:56.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 850: kill: (187136) - No such process 00:10:56.144 ERROR: process (pid: 187136) is no longer running 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 187044 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 187044 ']' 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 187044 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.144 23:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 187044 00:10:56.144 23:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.144 23:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.144 23:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 187044' 00:10:56.144 killing process with pid 187044 00:10:56.144 23:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 187044 00:10:56.144 23:52:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 187044 00:10:56.403 00:10:56.403 real 0m1.455s 00:10:56.403 user 0m4.013s 00:10:56.403 sys 0m0.390s 00:10:56.403 23:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.403 23:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:56.403 ************************************ 00:10:56.403 END TEST locking_overlapped_coremask 00:10:56.403 ************************************ 00:10:56.663 23:52:31 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:56.663 23:52:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:56.663 23:52:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.663 23:52:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:56.663 ************************************ 00:10:56.663 START TEST locking_overlapped_coremask_via_rpc 00:10:56.663 ************************************ 00:10:56.663 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:56.663 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=187394 00:10:56.663 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 187394 /var/tmp/spdk.sock 00:10:56.663 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:56.663 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 187394 ']' 00:10:56.663 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.663 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.663 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.663 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.663 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.663 [2024-12-09 23:52:31.453207] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:10:56.663 [2024-12-09 23:52:31.453252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187394 ] 00:10:56.663 [2024-12-09 23:52:31.525454] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
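locking_overlapped_coremask, closed out above, ends with check_remaining_locks: after the -m 0x7 target wins the overlap conflict with the -m 0x1c one, exactly the lock files for cores 0-2 must remain. The check is plain bash, lifted almost verbatim from the trace.

# Compare what is actually on disk against what a 0x7 mask should have created.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})

if [[ ${locks[*]} == "${locks_expected[*]}" ]]; then
    echo "lock files match core mask 0x7"
else
    echo "unexpected lock files: ${locks[*]}" >&2
fi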
00:10:56.663 [2024-12-09 23:52:31.525482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:56.663 [2024-12-09 23:52:31.565756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.663 [2024-12-09 23:52:31.565792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.663 [2024-12-09 23:52:31.565793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.922 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.922 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:56.922 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=187405 00:10:56.922 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 187405 /var/tmp/spdk2.sock 00:10:56.922 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:56.922 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 187405 ']' 00:10:56.922 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:56.922 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.922 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:56.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:56.922 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.922 23:52:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.922 [2024-12-09 23:52:31.841502] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:10:56.922 [2024-12-09 23:52:31.841546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187405 ] 00:10:57.180 [2024-12-09 23:52:31.934194] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:57.180 [2024-12-09 23:52:31.934223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:57.180 [2024-12-09 23:52:32.020568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.180 [2024-12-09 23:52:32.020684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.180 [2024-12-09 23:52:32.020685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.772 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.030 [2024-12-09 23:52:32.714239] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 187394 has claimed it. 
00:10:58.030 request: 00:10:58.030 { 00:10:58.030 "method": "framework_enable_cpumask_locks", 00:10:58.030 "req_id": 1 00:10:58.030 } 00:10:58.030 Got JSON-RPC error response 00:10:58.030 response: 00:10:58.030 { 00:10:58.030 "code": -32603, 00:10:58.030 "message": "Failed to claim CPU core: 2" 00:10:58.030 } 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 187394 /var/tmp/spdk.sock 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 187394 ']' 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 187405 /var/tmp/spdk2.sock 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 187405 ']' 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:58.030 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.031 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:58.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:58.031 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.031 23:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.289 23:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.289 23:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:58.289 23:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:58.289 23:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:58.289 23:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:58.289 23:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:58.289 00:10:58.289 real 0m1.723s 00:10:58.289 user 0m0.831s 00:10:58.289 sys 0m0.133s 00:10:58.289 23:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.289 23:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.289 ************************************ 00:10:58.289 END TEST locking_overlapped_coremask_via_rpc 00:10:58.289 ************************************ 00:10:58.289 23:52:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:58.289 23:52:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 187394 ]] 00:10:58.289 23:52:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 187394 00:10:58.289 23:52:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 187394 ']' 00:10:58.289 23:52:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 187394 00:10:58.289 23:52:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:58.289 23:52:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.289 23:52:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 187394 00:10:58.289 23:52:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.289 23:52:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.289 23:52:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 187394' 00:10:58.289 killing process with pid 187394 00:10:58.289 23:52:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 187394 00:10:58.289 23:52:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 187394 00:10:58.856 23:52:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 187405 ]] 00:10:58.857 23:52:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 187405 00:10:58.857 23:52:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 187405 ']' 00:10:58.857 23:52:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 187405 00:10:58.857 23:52:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:58.857 23:52:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:10:58.857 23:52:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 187405 00:10:58.857 23:52:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:58.857 23:52:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:58.857 23:52:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 187405' 00:10:58.857 killing process with pid 187405 00:10:58.857 23:52:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 187405 00:10:58.857 23:52:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 187405 00:10:59.117 23:52:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:59.117 23:52:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:59.117 23:52:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 187394 ]] 00:10:59.117 23:52:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 187394 00:10:59.117 23:52:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 187394 ']' 00:10:59.117 23:52:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 187394 00:10:59.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (187394) - No such process 00:10:59.117 23:52:33 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 187394 is not found' 00:10:59.117 Process with pid 187394 is not found 00:10:59.117 23:52:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 187405 ]] 00:10:59.117 23:52:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 187405 00:10:59.117 23:52:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 187405 ']' 00:10:59.117 23:52:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 187405 00:10:59.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (187405) - No such process 00:10:59.117 23:52:33 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 187405 is not found' 00:10:59.117 Process with pid 187405 is not found 00:10:59.117 23:52:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:59.117 00:10:59.117 real 0m14.325s 00:10:59.117 user 0m24.760s 00:10:59.117 sys 0m5.014s 00:10:59.117 23:52:33 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.117 23:52:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:59.117 ************************************ 00:10:59.117 END TEST cpu_locks 00:10:59.117 ************************************ 00:10:59.117 00:10:59.117 real 0m39.122s 00:10:59.117 user 1m14.317s 00:10:59.117 sys 0m8.529s 00:10:59.117 23:52:33 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.117 23:52:33 event -- common/autotest_common.sh@10 -- # set +x 00:10:59.117 ************************************ 00:10:59.117 END TEST event 00:10:59.117 ************************************ 00:10:59.117 23:52:33 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/thread.sh 00:10:59.117 23:52:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:59.117 23:52:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.117 23:52:33 -- common/autotest_common.sh@10 -- # set +x 00:10:59.117 ************************************ 00:10:59.117 START TEST thread 00:10:59.117 ************************************ 00:10:59.117 23:52:33 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/thread.sh 00:10:59.377 * Looking for test storage... 00:10:59.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread 00:10:59.377 23:52:34 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:59.377 23:52:34 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:10:59.377 23:52:34 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:59.377 23:52:34 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:59.377 23:52:34 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.377 23:52:34 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.377 23:52:34 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.377 23:52:34 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.377 23:52:34 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.377 23:52:34 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.377 23:52:34 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.377 23:52:34 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.377 23:52:34 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.377 23:52:34 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.377 23:52:34 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.377 23:52:34 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:59.377 23:52:34 thread -- scripts/common.sh@345 -- # : 1 00:10:59.377 23:52:34 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.377 23:52:34 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.377 23:52:34 thread -- scripts/common.sh@365 -- # decimal 1 00:10:59.377 23:52:34 thread -- scripts/common.sh@353 -- # local d=1 00:10:59.377 23:52:34 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.377 23:52:34 thread -- scripts/common.sh@355 -- # echo 1 00:10:59.377 23:52:34 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.377 23:52:34 thread -- scripts/common.sh@366 -- # decimal 2 00:10:59.377 23:52:34 thread -- scripts/common.sh@353 -- # local d=2 00:10:59.377 23:52:34 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.377 23:52:34 thread -- scripts/common.sh@355 -- # echo 2 00:10:59.377 23:52:34 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.377 23:52:34 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.377 23:52:34 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.377 23:52:34 thread -- scripts/common.sh@368 -- # return 0 00:10:59.377 23:52:34 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.377 23:52:34 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:59.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.377 --rc genhtml_branch_coverage=1 00:10:59.377 --rc genhtml_function_coverage=1 00:10:59.377 --rc genhtml_legend=1 00:10:59.377 --rc geninfo_all_blocks=1 00:10:59.377 --rc geninfo_unexecuted_blocks=1 00:10:59.377 00:10:59.377 ' 00:10:59.377 23:52:34 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:59.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.377 --rc genhtml_branch_coverage=1 00:10:59.377 --rc genhtml_function_coverage=1 00:10:59.377 --rc genhtml_legend=1 00:10:59.377 --rc geninfo_all_blocks=1 00:10:59.377 --rc geninfo_unexecuted_blocks=1 00:10:59.377 00:10:59.377 ' 00:10:59.377 23:52:34 
thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:59.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.377 --rc genhtml_branch_coverage=1 00:10:59.377 --rc genhtml_function_coverage=1 00:10:59.377 --rc genhtml_legend=1 00:10:59.377 --rc geninfo_all_blocks=1 00:10:59.377 --rc geninfo_unexecuted_blocks=1 00:10:59.377 00:10:59.377 ' 00:10:59.377 23:52:34 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:59.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.377 --rc genhtml_branch_coverage=1 00:10:59.377 --rc genhtml_function_coverage=1 00:10:59.377 --rc genhtml_legend=1 00:10:59.377 --rc geninfo_all_blocks=1 00:10:59.377 --rc geninfo_unexecuted_blocks=1 00:10:59.377 00:10:59.377 ' 00:10:59.377 23:52:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:59.377 23:52:34 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:59.377 23:52:34 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.377 23:52:34 thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.377 ************************************ 00:10:59.377 START TEST thread_poller_perf 00:10:59.377 ************************************ 00:10:59.377 23:52:34 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:59.377 [2024-12-09 23:52:34.230061] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:10:59.377 [2024-12-09 23:52:34.230130] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187966 ] 00:10:59.377 [2024-12-09 23:52:34.306792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.636 [2024-12-09 23:52:34.348676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.636 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:11:00.574 [2024-12-09T22:52:35.510Z] ====================================== 00:11:00.574 [2024-12-09T22:52:35.510Z] busy:2305935702 (cyc) 00:11:00.574 [2024-12-09T22:52:35.510Z] total_run_count: 408000 00:11:00.574 [2024-12-09T22:52:35.510Z] tsc_hz: 2300000000 (cyc) 00:11:00.574 [2024-12-09T22:52:35.510Z] ====================================== 00:11:00.574 [2024-12-09T22:52:35.510Z] poller_cost: 5651 (cyc), 2456 (nsec) 00:11:00.574 00:11:00.574 real 0m1.186s 00:11:00.574 user 0m1.102s 00:11:00.574 sys 0m0.080s 00:11:00.574 23:52:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.574 23:52:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:00.574 ************************************ 00:11:00.574 END TEST thread_poller_perf 00:11:00.574 ************************************ 00:11:00.574 23:52:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:00.574 23:52:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:11:00.574 23:52:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.574 23:52:35 thread -- common/autotest_common.sh@10 -- # set +x 00:11:00.574 ************************************ 00:11:00.574 START TEST thread_poller_perf 00:11:00.574 ************************************ 00:11:00.574 23:52:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:00.574 [2024-12-09 23:52:35.484686] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:11:00.574 [2024-12-09 23:52:35.484754] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188165 ] 00:11:00.833 [2024-12-09 23:52:35.564216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.833 [2024-12-09 23:52:35.603463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.833 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:11:01.771 [2024-12-09T22:52:36.707Z] ====================================== 00:11:01.771 [2024-12-09T22:52:36.707Z] busy:2301443508 (cyc) 00:11:01.771 [2024-12-09T22:52:36.707Z] total_run_count: 5100000 00:11:01.771 [2024-12-09T22:52:36.707Z] tsc_hz: 2300000000 (cyc) 00:11:01.771 [2024-12-09T22:52:36.707Z] ====================================== 00:11:01.771 [2024-12-09T22:52:36.707Z] poller_cost: 451 (cyc), 196 (nsec) 00:11:01.771 00:11:01.771 real 0m1.178s 00:11:01.771 user 0m1.094s 00:11:01.771 sys 0m0.080s 00:11:01.771 23:52:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.771 23:52:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:01.771 ************************************ 00:11:01.771 END TEST thread_poller_perf 00:11:01.771 ************************************ 00:11:01.771 23:52:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:01.771 00:11:01.771 real 0m2.677s 00:11:01.771 user 0m2.352s 00:11:01.771 sys 0m0.342s 00:11:01.771 23:52:36 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.771 23:52:36 thread -- common/autotest_common.sh@10 -- # set +x 00:11:01.771 ************************************ 00:11:01.771 END TEST thread 00:11:01.771 ************************************ 00:11:02.032 23:52:36 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:02.032 23:52:36 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/cmdline.sh 00:11:02.032 23:52:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:02.032 23:52:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.032 23:52:36 -- common/autotest_common.sh@10 -- # set +x 00:11:02.032 ************************************ 00:11:02.032 START TEST app_cmdline 00:11:02.032 ************************************ 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/cmdline.sh 00:11:02.032 * Looking for test storage... 
00:11:02.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.032 23:52:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:02.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.032 --rc genhtml_branch_coverage=1 00:11:02.032 --rc genhtml_function_coverage=1 00:11:02.032 --rc genhtml_legend=1 00:11:02.032 --rc geninfo_all_blocks=1 00:11:02.032 --rc geninfo_unexecuted_blocks=1 00:11:02.032 00:11:02.032 ' 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:02.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.032 --rc genhtml_branch_coverage=1 00:11:02.032 --rc genhtml_function_coverage=1 00:11:02.032 --rc genhtml_legend=1 00:11:02.032 --rc geninfo_all_blocks=1 00:11:02.032 --rc geninfo_unexecuted_blocks=1 
00:11:02.032 00:11:02.032 ' 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:02.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.032 --rc genhtml_branch_coverage=1 00:11:02.032 --rc genhtml_function_coverage=1 00:11:02.032 --rc genhtml_legend=1 00:11:02.032 --rc geninfo_all_blocks=1 00:11:02.032 --rc geninfo_unexecuted_blocks=1 00:11:02.032 00:11:02.032 ' 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:02.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.032 --rc genhtml_branch_coverage=1 00:11:02.032 --rc genhtml_function_coverage=1 00:11:02.032 --rc genhtml_legend=1 00:11:02.032 --rc geninfo_all_blocks=1 00:11:02.032 --rc geninfo_unexecuted_blocks=1 00:11:02.032 00:11:02.032 ' 00:11:02.032 23:52:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:02.032 23:52:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=188509 00:11:02.032 23:52:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:02.032 23:52:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 188509 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 188509 ']' 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.032 23:52:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:02.292 [2024-12-09 23:52:36.971866] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:11:02.292 [2024-12-09 23:52:36.971916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188509 ] 00:11:02.292 [2024-12-09 23:52:37.044442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.292 [2024-12-09 23:52:37.083714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.552 23:52:37 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.552 23:52:37 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:11:02.552 23:52:37 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py spdk_get_version 00:11:02.552 { 00:11:02.552 "version": "SPDK v25.01-pre git sha1 b6a18b192", 00:11:02.552 "fields": { 00:11:02.552 "major": 25, 00:11:02.552 "minor": 1, 00:11:02.552 "patch": 0, 00:11:02.552 "suffix": "-pre", 00:11:02.552 "commit": "b6a18b192" 00:11:02.552 } 00:11:02.552 } 00:11:02.812 23:52:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:02.812 23:52:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:02.812 23:52:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:02.812 23:52:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:02.812 23:52:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:02.812 23:52:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:02.812 23:52:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.812 23:52:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:02.812 23:52:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:02.812 23:52:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:11:02.812 23:52:37 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:02.812 request: 00:11:02.812 { 00:11:02.812 "method": "env_dpdk_get_mem_stats", 00:11:02.812 "req_id": 1 00:11:02.812 } 00:11:02.812 Got JSON-RPC error response 00:11:02.812 response: 00:11:02.812 { 00:11:02.812 "code": -32601, 00:11:02.812 "message": "Method not found" 00:11:02.812 } 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:02.812 23:52:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 188509 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 188509 ']' 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 188509 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.812 23:52:37 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 188509 00:11:03.072 23:52:37 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.072 23:52:37 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.072 23:52:37 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 188509' 00:11:03.072 killing process with pid 188509 00:11:03.072 23:52:37 app_cmdline -- common/autotest_common.sh@973 -- # kill 188509 00:11:03.072 23:52:37 app_cmdline -- common/autotest_common.sh@978 -- # wait 188509 00:11:03.332 00:11:03.332 real 0m1.329s 00:11:03.332 user 0m1.557s 00:11:03.332 sys 0m0.443s 00:11:03.332 23:52:38 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.332 23:52:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:03.332 ************************************ 00:11:03.332 END TEST app_cmdline 00:11:03.332 ************************************ 00:11:03.332 23:52:38 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/version.sh 00:11:03.332 23:52:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:03.332 23:52:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.332 23:52:38 -- common/autotest_common.sh@10 -- # set +x 00:11:03.332 ************************************ 00:11:03.332 START TEST version 00:11:03.332 ************************************ 00:11:03.332 23:52:38 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/version.sh 00:11:03.332 * Looking for test storage... 
00:11:03.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app 00:11:03.332 23:52:38 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:03.332 23:52:38 version -- common/autotest_common.sh@1711 -- # lcov --version 00:11:03.332 23:52:38 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:03.592 23:52:38 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:03.592 23:52:38 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.592 23:52:38 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.592 23:52:38 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.592 23:52:38 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.592 23:52:38 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.592 23:52:38 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.592 23:52:38 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.592 23:52:38 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.592 23:52:38 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.592 23:52:38 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.592 23:52:38 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.592 23:52:38 version -- scripts/common.sh@344 -- # case "$op" in 00:11:03.592 23:52:38 version -- scripts/common.sh@345 -- # : 1 00:11:03.592 23:52:38 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.592 23:52:38 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:03.592 23:52:38 version -- scripts/common.sh@365 -- # decimal 1 00:11:03.592 23:52:38 version -- scripts/common.sh@353 -- # local d=1 00:11:03.592 23:52:38 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.592 23:52:38 version -- scripts/common.sh@355 -- # echo 1 00:11:03.592 23:52:38 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.592 23:52:38 version -- scripts/common.sh@366 -- # decimal 2 00:11:03.592 23:52:38 version -- scripts/common.sh@353 -- # local d=2 00:11:03.592 23:52:38 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.592 23:52:38 version -- scripts/common.sh@355 -- # echo 2 00:11:03.592 23:52:38 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.592 23:52:38 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.592 23:52:38 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.592 23:52:38 version -- scripts/common.sh@368 -- # return 0 00:11:03.593 23:52:38 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.593 23:52:38 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:03.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.593 --rc genhtml_branch_coverage=1 00:11:03.593 --rc genhtml_function_coverage=1 00:11:03.593 --rc genhtml_legend=1 00:11:03.593 --rc geninfo_all_blocks=1 00:11:03.593 --rc geninfo_unexecuted_blocks=1 00:11:03.593 00:11:03.593 ' 00:11:03.593 23:52:38 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:03.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.593 --rc genhtml_branch_coverage=1 00:11:03.593 --rc genhtml_function_coverage=1 00:11:03.593 --rc genhtml_legend=1 00:11:03.593 --rc geninfo_all_blocks=1 00:11:03.593 --rc geninfo_unexecuted_blocks=1 00:11:03.593 00:11:03.593 ' 00:11:03.593 23:52:38 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:03.593 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.593 --rc genhtml_branch_coverage=1 00:11:03.593 --rc genhtml_function_coverage=1 00:11:03.593 --rc genhtml_legend=1 00:11:03.593 --rc geninfo_all_blocks=1 00:11:03.593 --rc geninfo_unexecuted_blocks=1 00:11:03.593 00:11:03.593 ' 00:11:03.593 23:52:38 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:03.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.593 --rc genhtml_branch_coverage=1 00:11:03.593 --rc genhtml_function_coverage=1 00:11:03.593 --rc genhtml_legend=1 00:11:03.593 --rc geninfo_all_blocks=1 00:11:03.593 --rc geninfo_unexecuted_blocks=1 00:11:03.593 00:11:03.593 ' 00:11:03.593 23:52:38 version -- app/version.sh@17 -- # get_header_version major 00:11:03.593 23:52:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:11:03.593 23:52:38 version -- app/version.sh@14 -- # cut -f2 00:11:03.593 23:52:38 version -- app/version.sh@14 -- # tr -d '"' 00:11:03.593 23:52:38 version -- app/version.sh@17 -- # major=25 00:11:03.593 23:52:38 version -- app/version.sh@18 -- # get_header_version minor 00:11:03.593 23:52:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:11:03.593 23:52:38 version -- app/version.sh@14 -- # cut -f2 00:11:03.593 23:52:38 version -- app/version.sh@14 -- # tr -d '"' 00:11:03.593 23:52:38 version -- app/version.sh@18 -- # minor=1 00:11:03.593 23:52:38 version -- app/version.sh@19 -- # get_header_version patch 00:11:03.593 23:52:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:11:03.593 23:52:38 version -- app/version.sh@14 -- # cut -f2 00:11:03.593 23:52:38 version -- app/version.sh@14 -- # tr -d '"' 00:11:03.593 23:52:38 version -- app/version.sh@19 -- # patch=0 00:11:03.593 23:52:38 version -- app/version.sh@20 -- # get_header_version suffix 00:11:03.593 23:52:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:11:03.593 23:52:38 version -- app/version.sh@14 -- # cut -f2 00:11:03.593 23:52:38 version -- app/version.sh@14 -- # tr -d '"' 00:11:03.593 23:52:38 version -- app/version.sh@20 -- # suffix=-pre 00:11:03.593 23:52:38 version -- app/version.sh@22 -- # version=25.1 00:11:03.593 23:52:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:03.593 23:52:38 version -- app/version.sh@28 -- # version=25.1rc0 00:11:03.593 23:52:38 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python 00:11:03.593 23:52:38 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:03.593 23:52:38 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:03.593 23:52:38 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:03.593 00:11:03.593 real 0m0.247s 00:11:03.593 user 0m0.155s 00:11:03.593 sys 0m0.136s 00:11:03.593 23:52:38 version -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:11:03.593 23:52:38 version -- common/autotest_common.sh@10 -- # set +x 00:11:03.593 ************************************ 00:11:03.593 END TEST version 00:11:03.593 ************************************ 00:11:03.593 23:52:38 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:03.593 23:52:38 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:11:03.593 23:52:38 -- spdk/autotest.sh@194 -- # uname -s 00:11:03.593 23:52:38 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:03.593 23:52:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:03.593 23:52:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:03.593 23:52:38 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:11:03.593 23:52:38 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:11:03.593 23:52:38 -- spdk/autotest.sh@260 -- # timing_exit lib 00:11:03.593 23:52:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:03.593 23:52:38 -- common/autotest_common.sh@10 -- # set +x 00:11:03.593 23:52:38 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:11:03.593 23:52:38 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:11:03.593 23:52:38 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:11:03.593 23:52:38 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:11:03.593 23:52:38 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:11:03.593 23:52:38 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:11:03.593 23:52:38 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:03.593 23:52:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.593 23:52:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.593 23:52:38 -- common/autotest_common.sh@10 -- # set +x 00:11:03.593 ************************************ 00:11:03.593 START TEST nvmf_tcp 00:11:03.593 ************************************ 00:11:03.593 23:52:38 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:03.853 * Looking for test storage... 
00:11:03.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:11:03.853 23:52:38 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:03.853 23:52:38 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:11:03.853 23:52:38 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:03.853 23:52:38 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.853 23:52:38 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:11:03.853 23:52:38 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.853 23:52:38 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:03.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.853 --rc genhtml_branch_coverage=1 00:11:03.853 --rc genhtml_function_coverage=1 00:11:03.853 --rc genhtml_legend=1 00:11:03.853 --rc geninfo_all_blocks=1 00:11:03.853 --rc geninfo_unexecuted_blocks=1 00:11:03.853 00:11:03.853 ' 00:11:03.853 23:52:38 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:03.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.853 --rc genhtml_branch_coverage=1 00:11:03.853 --rc genhtml_function_coverage=1 00:11:03.853 --rc genhtml_legend=1 00:11:03.853 --rc geninfo_all_blocks=1 00:11:03.853 --rc geninfo_unexecuted_blocks=1 00:11:03.853 00:11:03.853 ' 00:11:03.853 23:52:38 nvmf_tcp -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:11:03.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.853 --rc genhtml_branch_coverage=1 00:11:03.853 --rc genhtml_function_coverage=1 00:11:03.853 --rc genhtml_legend=1 00:11:03.853 --rc geninfo_all_blocks=1 00:11:03.853 --rc geninfo_unexecuted_blocks=1 00:11:03.853 00:11:03.853 ' 00:11:03.853 23:52:38 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:03.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.854 --rc genhtml_branch_coverage=1 00:11:03.854 --rc genhtml_function_coverage=1 00:11:03.854 --rc genhtml_legend=1 00:11:03.854 --rc geninfo_all_blocks=1 00:11:03.854 --rc geninfo_unexecuted_blocks=1 00:11:03.854 00:11:03.854 ' 00:11:03.854 23:52:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:11:03.854 23:52:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:11:03.854 23:52:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:03.854 23:52:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.854 23:52:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.854 23:52:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:03.854 ************************************ 00:11:03.854 START TEST nvmf_target_core 00:11:03.854 ************************************ 00:11:03.854 23:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:04.114 * Looking for test storage... 00:11:04.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:04.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.114 --rc genhtml_branch_coverage=1 00:11:04.114 --rc genhtml_function_coverage=1 00:11:04.114 --rc genhtml_legend=1 00:11:04.114 --rc geninfo_all_blocks=1 00:11:04.114 --rc geninfo_unexecuted_blocks=1 00:11:04.114 00:11:04.114 ' 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:04.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.114 --rc genhtml_branch_coverage=1 00:11:04.114 --rc genhtml_function_coverage=1 00:11:04.114 --rc genhtml_legend=1 00:11:04.114 --rc geninfo_all_blocks=1 00:11:04.114 --rc geninfo_unexecuted_blocks=1 00:11:04.114 00:11:04.114 ' 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:04.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.114 --rc genhtml_branch_coverage=1 00:11:04.114 --rc genhtml_function_coverage=1 00:11:04.114 --rc genhtml_legend=1 00:11:04.114 --rc geninfo_all_blocks=1 00:11:04.114 --rc geninfo_unexecuted_blocks=1 00:11:04.114 00:11:04.114 ' 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:04.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.114 --rc genhtml_branch_coverage=1 00:11:04.114 --rc genhtml_function_coverage=1 00:11:04.114 --rc genhtml_legend=1 00:11:04.114 --rc geninfo_all_blocks=1 00:11:04.114 --rc geninfo_unexecuted_blocks=1 00:11:04.114 00:11:04.114 ' 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:11:04.114 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:04.115 
************************************ 00:11:04.115 START TEST nvmf_abort 00:11:04.115 ************************************ 00:11:04.115 23:52:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:04.375 * Looking for test storage... 00:11:04.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:11:04.375 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:04.375 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:11:04.375 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:04.375 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:04.375 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.375 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.375 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.375 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.375 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:04.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.376 --rc genhtml_branch_coverage=1 00:11:04.376 --rc genhtml_function_coverage=1 00:11:04.376 --rc genhtml_legend=1 00:11:04.376 --rc geninfo_all_blocks=1 00:11:04.376 --rc geninfo_unexecuted_blocks=1 00:11:04.376 00:11:04.376 ' 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:04.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.376 --rc genhtml_branch_coverage=1 00:11:04.376 --rc genhtml_function_coverage=1 00:11:04.376 --rc genhtml_legend=1 00:11:04.376 --rc geninfo_all_blocks=1 00:11:04.376 --rc geninfo_unexecuted_blocks=1 00:11:04.376 00:11:04.376 ' 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:04.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.376 --rc genhtml_branch_coverage=1 00:11:04.376 --rc genhtml_function_coverage=1 00:11:04.376 --rc genhtml_legend=1 00:11:04.376 --rc geninfo_all_blocks=1 00:11:04.376 --rc geninfo_unexecuted_blocks=1 00:11:04.376 00:11:04.376 ' 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:04.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.376 --rc genhtml_branch_coverage=1 00:11:04.376 --rc genhtml_function_coverage=1 00:11:04.376 --rc genhtml_legend=1 00:11:04.376 --rc geninfo_all_blocks=1 00:11:04.376 --rc geninfo_unexecuted_blocks=1 00:11:04.376 00:11:04.376 ' 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:11:04.376 23:52:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.953 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.953 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:11:10.953 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:10.953 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:10.953 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:10.953 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:10.953 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.954 23:52:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:10.954 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:10.954 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:10.954 23:52:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:10.954 Found net devices under 0000:86:00.0: cvl_0_0 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:10.954 Found net devices under 0000:86:00.1: cvl_0_1 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.954 23:52:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.954 23:52:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.954 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.954 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.954 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:10.954 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.954 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.954 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.954 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:10.954 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:10.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:11:10.954 00:11:10.954 --- 10.0.0.2 ping statistics --- 00:11:10.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.954 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:11:10.954 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:10.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:11:10.955 00:11:10.955 --- 10.0.0.1 ping statistics --- 00:11:10.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.955 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=192146 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 192146 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 192146 ']' 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.955 [2024-12-09 23:52:45.348101] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:11:10.955 [2024-12-09 23:52:45.348141] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.955 [2024-12-09 23:52:45.423461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:10.955 [2024-12-09 23:52:45.464013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.955 [2024-12-09 23:52:45.464052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.955 [2024-12-09 23:52:45.464060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.955 [2024-12-09 23:52:45.464065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.955 [2024-12-09 23:52:45.464073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.955 [2024-12-09 23:52:45.465514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.955 [2024-12-09 23:52:45.465624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.955 [2024-12-09 23:52:45.465625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.955 [2024-12-09 23:52:45.614932] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.955 Malloc0 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.955 Delay0 
00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.955 [2024-12-09 23:52:45.697942] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.955 23:52:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:10.955 [2024-12-09 23:52:45.830944] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:13.487 [2024-12-09 23:52:47.900772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1100ab0 is same with the state(6) to be set 00:11:13.487 Initializing NVMe Controllers 00:11:13.487 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:13.487 controller IO queue size 128 less than required 00:11:13.487 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:13.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:13.487 Initialization complete. Launching workers. 
00:11:13.487 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 37783 00:11:13.487 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37847, failed to submit 62 00:11:13.487 success 37787, unsuccessful 60, failed 0 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:13.487 rmmod nvme_tcp 00:11:13.487 rmmod nvme_fabrics 00:11:13.487 rmmod nvme_keyring 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 192146 ']' 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 192146 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 192146 ']' 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 192146 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.487 23:52:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 192146 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 192146' 00:11:13.487 killing process with pid 192146 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 192146 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 192146 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.487 23:52:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.396 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:15.396 00:11:15.396 real 0m11.323s 00:11:15.396 user 0m11.869s 00:11:15.396 sys 0m5.262s 00:11:15.396 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.396 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:15.396 ************************************ 00:11:15.396 END TEST nvmf_abort 00:11:15.396 ************************************ 00:11:15.396 23:52:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:15.396 23:52:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:15.396 23:52:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.396 23:52:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:15.658 ************************************ 00:11:15.658 START TEST nvmf_ns_hotplug_stress 00:11:15.658 ************************************ 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:15.658 * Looking for test storage... 
00:11:15.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:15.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.658 --rc genhtml_branch_coverage=1 00:11:15.658 --rc genhtml_function_coverage=1 00:11:15.658 --rc genhtml_legend=1 00:11:15.658 --rc geninfo_all_blocks=1 00:11:15.658 --rc geninfo_unexecuted_blocks=1 00:11:15.658 00:11:15.658 ' 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:15.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.658 --rc genhtml_branch_coverage=1 00:11:15.658 --rc genhtml_function_coverage=1 00:11:15.658 --rc genhtml_legend=1 00:11:15.658 --rc geninfo_all_blocks=1 00:11:15.658 --rc geninfo_unexecuted_blocks=1 00:11:15.658 00:11:15.658 ' 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:15.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.658 --rc genhtml_branch_coverage=1 00:11:15.658 --rc genhtml_function_coverage=1 00:11:15.658 --rc genhtml_legend=1 00:11:15.658 --rc geninfo_all_blocks=1 00:11:15.658 --rc geninfo_unexecuted_blocks=1 00:11:15.658 00:11:15.658 ' 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:15.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.658 --rc genhtml_branch_coverage=1 00:11:15.658 --rc genhtml_function_coverage=1 00:11:15.658 --rc genhtml_legend=1 00:11:15.658 --rc geninfo_all_blocks=1 00:11:15.658 --rc geninfo_unexecuted_blocks=1 00:11:15.658 00:11:15.658 ' 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.658 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:15.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:15.659 23:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.238 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.238 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:22.239 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.239 
23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:22.239 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:22.239 Found net devices under 0000:86:00.0: cvl_0_0 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:22.239 Found net devices under 0000:86:00.1: cvl_0_1 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:22.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms 00:11:22.239 00:11:22.239 --- 10.0.0.2 ping statistics --- 00:11:22.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.239 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:22.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:11:22.239 00:11:22.239 --- 10.0.0.1 ping statistics --- 00:11:22.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.239 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:11:22.239 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=196206 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 196206 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
196206 ']' 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.240 [2024-12-09 23:52:56.594085] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:11:22.240 [2024-12-09 23:52:56.594132] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.240 [2024-12-09 23:52:56.672937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:22.240 [2024-12-09 23:52:56.714144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.240 [2024-12-09 23:52:56.714184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.240 [2024-12-09 23:52:56.714191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.240 [2024-12-09 23:52:56.714197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.240 [2024-12-09 23:52:56.714202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
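The trace above shows how nvmftestinit builds the phy TCP topology for this run: the first e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as the target side (10.0.0.2), the second port (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens TCP port 4420 on the initiator interface, and a ping in each direction confirms the link before nvmf_tgt is started inside the namespace. Below is a minimal standalone sketch of those same steps, assuming the interface names, addresses and binary path captured in this log; the TARGET_IF/INITIATOR_IF/NS variables are illustrative shorthands, not names from the scripts.

# sketch of the netns-based NVMe/TCP topology set up by nvmftestinit in this run
TARGET_IF=cvl_0_0        # port handed to the target namespace
INITIATOR_IF=cvl_0_1     # port kept in the root namespace for the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# accept NVMe/TCP traffic on the test port, as the ipts helper does above
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# connectivity checks in both directions, mirroring the ping output logged above
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# launch the target inside the namespace on core mask 0xE, as nvmfappstart does
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &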
00:11:22.240 [2024-12-09 23:52:56.715608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.240 [2024-12-09 23:52:56.715713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.240 [2024-12-09 23:52:56.715714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:22.240 23:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:22.240 [2024-12-09 23:52:57.016614] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.240 23:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:22.498 23:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.499 [2024-12-09 23:52:57.410019] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.757 23:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:22.757 23:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:23.015 Malloc0 00:11:23.015 23:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:23.274 Delay0 00:11:23.274 23:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.532 23:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:23.532 NULL1 00:11:23.532 23:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:23.790 23:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=196491 00:11:23.790 23:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:23.790 23:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:23.790 23:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.049 23:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.308 23:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:24.308 23:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:24.566 true 00:11:24.566 23:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:24.566 23:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.824 23:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.824 23:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:24.824 23:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:25.083 true 00:11:25.083 23:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:25.083 23:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.342 23:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.601 23:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:25.601 23:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:25.860 true 00:11:25.860 23:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:25.860 23:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.118 23:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.118 23:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:26.118 23:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:26.376 true 00:11:26.376 23:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:26.376 23:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.635 23:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.893 23:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:26.894 23:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:27.152 true 00:11:27.152 23:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:27.152 23:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.410 23:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.410 23:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:27.410 23:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:27.668 true 00:11:27.668 23:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:27.668 23:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.926 23:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.184 23:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:28.184 23:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:28.443 true 
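With the target up, ns_hotplug_stress.sh drives everything through rpc.py: it creates the TCP transport, the cnode1 subsystem with a data listener on 10.0.0.2:4420 and a discovery listener, a Malloc0 bdev wrapped by a Delay0 delay bdev, and a NULL1 null bdev created at size 1000 (the value the stress loop later grows one step at a time), then launches spdk_nvme_perf for a 30-second randread run while namespaces are hot-plugged underneath it. The following is a condensed sketch of that RPC sequence; every command and parameter is copied from the trace above, and the $rpc shorthand is illustrative.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py   # rpc_py path used in this run

# transport, subsystem and listeners (parameters exactly as logged)
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# backing bdevs: Malloc0 behind a Delay0 delay bdev, plus a resizable NULL1 null bdev
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# 30-second randread load from the initiator side while namespaces are hot-plugged
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf \
    -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!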
00:11:28.443 23:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:28.443 23:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.701 23:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.701 23:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:28.701 23:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:28.959 true 00:11:28.959 23:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:28.959 23:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.218 23:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.476 23:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:29.476 23:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:29.734 true 00:11:29.734 23:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:29.734 23:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.992 23:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.992 23:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:29.992 23:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:30.250 true 00:11:30.250 23:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:30.250 23:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.508 23:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.767 23:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:30.767 23:53:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:31.026 true 00:11:31.026 23:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:31.026 23:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.284 23:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.542 23:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:31.542 23:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:31.542 true 00:11:31.542 23:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:31.542 23:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.801 23:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.059 23:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:32.059 23:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:32.318 true 00:11:32.318 23:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:32.318 23:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.576 23:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.835 23:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:32.835 23:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:32.835 true 00:11:32.835 23:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:32.835 23:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.092 23:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.350 23:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:33.350 23:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:33.607 true 00:11:33.608 23:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:33.608 23:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.865 23:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.124 23:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:34.124 23:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:34.124 true 00:11:34.124 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:34.124 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.382 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.640 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:34.640 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:34.898 true 00:11:34.898 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:34.898 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.156 23:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.414 23:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:35.414 23:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:35.414 true 00:11:35.414 23:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:35.414 23:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:11:35.672 23:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.929 23:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:35.929 23:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:36.187 true 00:11:36.187 23:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:36.187 23:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.445 23:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.703 23:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:36.703 23:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:36.703 true 00:11:36.703 23:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:36.703 23:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.961 23:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.218 23:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:37.219 23:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:37.478 true 00:11:37.478 23:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:37.478 23:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.738 23:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.006 23:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:38.006 23:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:38.006 true 00:11:38.006 23:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 
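From this point until spdk_nvme_perf exits, the trace is the same iteration repeated: the script checks that the perf process is still alive with kill -0 $PERF_PID, removes namespace 1 from cnode1, re-adds Delay0, increments null_size and resizes NULL1 to the new value; the bare "true" lines are the result rpc.py prints for each bdev_null_resize call. A bash sketch of that loop follows; the variable names match the trace, but the while form paraphrases the traced script lines @44-@50 rather than quoting the script verbatim.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
null_size=1000

while kill -0 "$PERF_PID" 2>/dev/null; do       # keep going while spdk_nvme_perf runs
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"    # prints "true" on success
done

The point of the test is that these hot-remove, hot-add and resize operations race against the outstanding randread I/O from the perf run without bringing down the target.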
00:11:38.006 23:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.263 23:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.520 23:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:38.520 23:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:38.778 true 00:11:38.778 23:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:38.778 23:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.035 23:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.293 23:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:39.293 23:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:39.293 true 00:11:39.550 23:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:39.550 23:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.550 23:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.808 23:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:39.808 23:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:40.067 true 00:11:40.067 23:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:40.067 23:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.324 23:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.582 23:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:40.582 23:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:40.582 true 00:11:40.841 23:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:40.841 23:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.841 23:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.098 23:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:41.098 23:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:41.356 true 00:11:41.356 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:41.356 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.613 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.871 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:41.871 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:42.130 true 00:11:42.130 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:42.130 23:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.130 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.387 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:42.387 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:42.645 true 00:11:42.645 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:42.645 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.904 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.162 23:53:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:43.162 23:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:43.420 true 00:11:43.420 23:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:43.420 23:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.678 23:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.678 23:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:43.678 23:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:43.936 true 00:11:43.936 23:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:43.936 23:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.194 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.452 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:11:44.452 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:44.710 true 00:11:44.710 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:44.710 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.968 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.968 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:11:44.968 23:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:11:45.225 true 00:11:45.225 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:45.225 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.483 23:53:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.741 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:11:45.741 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:11:45.999 true 00:11:45.999 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:45.999 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.257 23:53:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.514 23:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:11:46.514 23:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:11:46.514 true 00:11:46.514 23:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:46.514 23:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.772 23:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.030 23:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:11:47.030 23:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:11:47.287 true 00:11:47.287 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:47.287 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.550 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.807 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:11:47.807 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:11:47.807 true 00:11:47.807 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:47.807 23:53:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.065 23:53:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:48.323 23:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:11:48.323 23:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:11:48.581 true 00:11:48.581 23:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:48.581 23:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.838 23:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.095 23:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:11:49.095 23:53:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:11:49.095 true 00:11:49.095 23:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:49.095 23:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.353 23:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.611 23:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:11:49.611 23:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:11:49.869 true 00:11:49.869 23:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:49.869 23:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.127 23:53:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:50.383 23:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:11:50.383 23:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1041 00:11:50.383 true 00:11:50.641 23:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:50.641 23:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.641 23:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:50.899 23:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:11:50.899 23:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:11:51.156 true 00:11:51.156 23:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:51.156 23:53:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.414 23:53:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.673 23:53:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:11:51.673 23:53:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:11:51.673 true 00:11:51.673 23:53:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:51.673 23:53:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.930 23:53:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.188 23:53:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:11:52.188 23:53:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:11:52.445 true 00:11:52.445 23:53:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:52.445 23:53:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.703 23:53:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.961 23:53:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1045 00:11:52.961 23:53:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:11:52.961 true 00:11:53.219 23:53:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:53.219 23:53:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.219 23:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.476 23:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:11:53.476 23:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:11:53.733 true 00:11:53.734 23:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:53.734 23:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.992 23:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.992 Initializing NVMe Controllers 00:11:53.992 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:53.992 Controller IO queue size 128, less than required. 00:11:53.992 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:53.992 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:53.992 Initialization complete. Launching workers. 
00:11:53.992 ======================================================== 00:11:53.992 Latency(us) 00:11:53.992 Device Information : IOPS MiB/s Average min max 00:11:53.992 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 26767.30 13.07 4781.94 1620.80 8896.17 00:11:53.992 ======================================================== 00:11:53.992 Total : 26767.30 13.07 4781.94 1620.80 8896.17 00:11:53.992 00:11:54.250 23:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:11:54.250 23:53:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:11:54.250 true 00:11:54.250 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 196491 00:11:54.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (196491) - No such process 00:11:54.250 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 196491 00:11:54.250 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.508 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:54.766 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:54.766 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:54.766 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:54.766 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.766 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:55.025 null0 00:11:55.025 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:55.025 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:55.025 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:55.025 null1 00:11:55.284 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:55.284 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:55.284 23:53:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:55.284 null2 00:11:55.284 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:55.284 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
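The resize/remove/add churn that dominates the trace above (null_size stepping from 1026 up to 1047) is the first phase of ns_hotplug_stress.sh: while the perf workload (PID 196491 in this run) is still alive, the script hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attaches the Delay0 bdev, and resizes the NULL1 null bdev by one step per pass; once kill -0 reports the workload gone ("No such process" above), it reaps it and strips namespaces 1 and 2. A condensed sketch of that loop, reconstructed from the @44-@55 trace lines rather than copied from the script, is shown below; $perf_pid and the starting null_size are illustrative stand-ins.

    # Reconstructed sketch of the phase-1 hot-plug/resize loop (trace lines @44-@55).
    # Names and the exact control flow are stand-ins inferred from the trace; the
    # real ns_hotplug_stress.sh may arrange the same RPC calls differently.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
    perf_pid=196491                       # stand-in: PID of the I/O workload started earlier in the test
    null_size=1025                        # stand-in: whatever size the loop had reached before this excerpt
    while kill -0 "$perf_pid"; do         # @44: keep stressing while the I/O workload is alive
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # @45: hot-unplug NSID 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # @46: re-attach the Delay0 bdev
        ((null_size++))                                                   # @49
        "$rpc" bdev_null_resize NULL1 "$null_size"                        # @50: resize NULL1 while I/O is in flight
    done
    wait "$perf_pid"                                                      # @53: reap the finished workload
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # @54
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2          # @55

The latency summary printed just above is the initiator's own report for NSID 2, gathered while this loop was churning namespace 1; the nthreads=8 and bdev_null_create entries that follow belong to the second, multi-worker phase sketched further below.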
00:11:55.284 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:55.559 null3 00:11:55.559 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:55.559 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:55.559 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:55.817 null4 00:11:55.817 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:55.817 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:55.817 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:56.075 null5 00:11:56.075 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:56.075 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:56.075 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:56.075 null6 00:11:56.075 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:56.075 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:56.075 23:53:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:56.334 null7 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
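The @58-@64 entries above set up that second phase: the script creates eight null bdevs (null0 through null7), starts one add_remove worker per bdev in the background, and records each worker's PID in the pids array so it can wait on all of them later (the @66 wait a little further down lists those PIDs). A reconstructed sketch, with $rpc the same stand-in as before:

    # Reconstructed from the @58-@64 trace lines above and the @66 wait below;
    # not a verbatim copy of ns_hotplug_stress.sh.
    nthreads=8                                        # @58
    pids=()                                           # @58
    for ((i = 0; i < nthreads; i++)); do              # @59/@60: create the backing null bdevs
        "$rpc" bdev_null_create "null$i" 100 4096     # size 100, block size 4096, as in the trace
    done
    for ((i = 0; i < nthreads; i++)); do              # @62-@64: one background worker per bdev
        add_remove "$((i + 1))" "null$i" &            # @63: e.g. add_remove 1 null0, add_remove 2 null1, ...
        pids+=($!)                                    # @64: remember the worker PID
    done
    wait "${pids[@]}"                                 # @66: wait 202163 202164 202166 ...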
00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 202163 202164 202166 202168 202170 202172 202173 202175 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.334 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:56.593 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.593 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:56.593 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:56.593 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.593 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:56.593 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:56.593 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:56.593 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.851 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:57.110 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.110 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.110 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.110 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:57.110 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:57.110 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:57.110 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:57.110 23:53:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:57.110 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.110 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.110 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:57.110 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.110 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.110 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:57.110 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.110 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.110 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.110 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.110 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:57.110 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:57.110 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.110 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:57.368 23:53:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:57.368 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.626 23:53:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.626 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:57.885 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.885 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:57.885 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.885 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:57.885 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.885 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:57.885 23:53:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:57.885 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.143 23:53:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:58.143 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:58.143 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:58.144 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:58.144 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:58.402 23:53:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.402 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:58.661 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:58.661 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:58.661 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:11:58.661 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:58.661 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.661 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:58.661 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:58.661 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
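The null0–null7 bdevs being attached are not created in this excerpt; they would normally be registered earlier in the script with the bdev_null_create RPC. A hedged example of that prerequisite step (the 100 MB size and 4096-byte block size are assumptions, not values from this run):

  # Hypothetical prerequisite: create the eight null bdevs referenced above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
  for n in {0..7}; do
      "$rpc" bdev_null_create "null$n" 100 4096   # name, size in MB, block size
  done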
00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:58.920 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:59.179 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:59.179 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.179 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:59.179 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.179 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:59.179 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:59.179 23:53:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:59.179 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.179 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.179 23:53:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:59.179 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.179 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.179 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:59.179 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.179 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.179 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.179 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.179 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:59.179 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:59.437 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:59.438 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.438 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:59.438 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:59.438 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:59.438 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:59.697 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:59.955 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:59.955 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:59.955 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.955 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.955 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
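Nothing in this run inspects the subsystem between iterations; if you wanted to confirm the churn is landing, one option is the nvmf_get_subsystems RPC, which reports the namespaces currently attached. A hedged helper, assuming the usual JSON shape of that RPC's output:

  # Hypothetical check (not part of this test): count namespaces on cnode1.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
  "$rpc" nvmf_get_subsystems nqn.2016-06.io.spdk:cnode1 \
      | python3 -c 'import json, sys; print(len(json.load(sys.stdin)[0]["namespaces"]))'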
00:11:59.955 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:59.955 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:59.955 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.955 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.955 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.214 23:53:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:00.214 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:00.214 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:00.214 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:00.214 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:00.214 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:00.214 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.472 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.472 rmmod nvme_tcp 00:12:00.472 rmmod nvme_fabrics 00:12:00.731 rmmod nvme_keyring 00:12:00.731 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.731 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:12:00.731 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:12:00.731 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 196206 ']' 00:12:00.731 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 196206 00:12:00.731 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 196206 ']' 00:12:00.731 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 196206 00:12:00.731 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:12:00.731 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.731 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 196206 00:12:00.731 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:00.731 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:00.731 23:53:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 196206' 00:12:00.731 killing process with pid 196206 00:12:00.731 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 196206 00:12:00.731 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 196206 00:12:00.991 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.991 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.991 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.991 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:12:00.991 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.991 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:00.991 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.991 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.991 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.991 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.991 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.991 23:53:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.899 23:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.899 00:12:02.899 real 0m47.394s 00:12:02.899 user 3m22.426s 00:12:02.899 sys 0m16.946s 00:12:02.899 23:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.899 23:53:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.899 ************************************ 00:12:02.899 END TEST nvmf_ns_hotplug_stress 00:12:02.899 ************************************ 00:12:02.899 23:53:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:02.899 23:53:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.899 23:53:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.899 23:53:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:03.161 ************************************ 00:12:03.161 START TEST nvmf_delete_subsystem 00:12:03.161 ************************************ 00:12:03.161 23:53:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:03.161 * Looking for test storage... 
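The stress test ends here after roughly 47 seconds of wall-clock time, and its nvmftestfini teardown is what the preceding entries show: the kernel NVMe-over-TCP initiator modules are unloaded, the nvmf_tgt process (pid 196206 in this run) is killed, the SPDK-tagged iptables rules are stripped, and the target network namespace is torn down before nvmf_delete_subsystem starts. A condensed sketch of that cleanup, reconstructed from the logged commands (the concrete netns removal command is an assumption; only the remove_spdk_ns wrapper appears in the log):

  # Condensed from the nvmftestfini entries above; pid, interface and netns
  # names are the ones this run used.
  modprobe -v -r nvme-tcp                      # log shows this also rmmod'ing nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 196206                                  # killprocess: stop the nvmf_tgt reactors
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-added rules
  ip netns delete cvl_0_0_ns_spdk              # assumed equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1                     # clear the initiator-side interface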
00:12:03.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:12:03.161 23:53:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:03.161 23:53:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:03.161 23:53:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:03.161 23:53:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:03.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.161 --rc genhtml_branch_coverage=1 00:12:03.161 --rc genhtml_function_coverage=1 00:12:03.161 --rc genhtml_legend=1 00:12:03.161 --rc geninfo_all_blocks=1 00:12:03.161 --rc geninfo_unexecuted_blocks=1 00:12:03.161 00:12:03.161 ' 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:03.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.161 --rc genhtml_branch_coverage=1 00:12:03.161 --rc genhtml_function_coverage=1 00:12:03.161 --rc genhtml_legend=1 00:12:03.161 --rc geninfo_all_blocks=1 00:12:03.161 --rc geninfo_unexecuted_blocks=1 00:12:03.161 00:12:03.161 ' 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:03.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.161 --rc genhtml_branch_coverage=1 00:12:03.161 --rc genhtml_function_coverage=1 00:12:03.161 --rc genhtml_legend=1 00:12:03.161 --rc geninfo_all_blocks=1 00:12:03.161 --rc geninfo_unexecuted_blocks=1 00:12:03.161 00:12:03.161 ' 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:03.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.161 --rc genhtml_branch_coverage=1 00:12:03.161 --rc genhtml_function_coverage=1 00:12:03.161 --rc genhtml_legend=1 00:12:03.161 --rc geninfo_all_blocks=1 00:12:03.161 --rc geninfo_unexecuted_blocks=1 00:12:03.161 00:12:03.161 ' 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:12:03.161 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.162 23:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- 
# local -ga x722 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:09.738 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.738 
23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:09.738 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:09.738 Found net devices under 0000:86:00.0: cvl_0_0 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:09.738 Found net devices under 0000:86:00.1: cvl_0_1 
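Device discovery in this block matched two Intel E810 functions (0000:86:00.0 and 0000:86:00.1, device ID 0x159b, ice driver) and resolved their kernel netdevs through the same sysfs glob the script uses, yielding cvl_0_0 and cvl_0_1. The lookup can be reproduced by hand; a small sketch using only paths that appear in the log:

  # Mirror of the pci_net_devs glob used by nvmf/common.sh for this run's NICs.
  for pci in 0000:86:00.0 0000:86:00.1; do
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
      done
  done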
00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:09.738 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.739 23:53:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:12:09.739 00:12:09.739 --- 10.0.0.2 ping statistics --- 00:12:09.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.739 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:09.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:12:09.739 00:12:09.739 --- 10.0.0.1 ping statistics --- 00:12:09.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.739 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=206562 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 206562 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 206562 ']' 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.739 23:53:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:09.739 [2024-12-09 23:53:44.190456] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:12:09.739 [2024-12-09 23:53:44.190506] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.739 [2024-12-09 23:53:44.269789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:09.739 [2024-12-09 23:53:44.310772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.739 [2024-12-09 23:53:44.310804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.739 [2024-12-09 23:53:44.310812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.739 [2024-12-09 23:53:44.310818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.739 [2024-12-09 23:53:44.310824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.739 [2024-12-09 23:53:44.311991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.739 [2024-12-09 23:53:44.311991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:09.739 [2024-12-09 23:53:44.460684] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:09.739 23:53:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:09.739 [2024-12-09 23:53:44.480888] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:09.739 NULL1 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:09.739 Delay0 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=206745 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:09.739 23:53:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:09.739 [2024-12-09 23:53:44.591833] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
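Taken together, the nvmf_tcp_init and delete_subsystem.sh trace above boils down to: move one E810 port into a private namespace for the target, address both ends, open TCP/4420, start nvmf_tgt inside the namespace, build a subsystem whose only namespace sits on a ~1 s delay bdev, and then drive it with spdk_nvme_perf. The following is a hedged, standalone sketch of that sequence; SPDK_DIR, the backgrounding, and the socket-wait loop are illustrative, while the argument values are copied from the trace rather than re-derived. Run as root.

#!/usr/bin/env bash
set -e
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}     # assumption: point at the local SPDK checkout
RPC="$SPDK_DIR/scripts/rpc.py"

# Network prep (values from the trace): one port goes into a namespace for the
# target, the peer port stays in the default namespace for the initiator.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the target inside the namespace (cores 0-1, full trace mask) and wait
# for its RPC socket; the unix socket is reachable from the default namespace.
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# Subsystem with a deliberately slow namespace, so I/O is still queued when it gets deleted.
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" bdev_null_create NULL1 1000 512      # 1000 MiB null bdev, 512 B blocks
"$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Initiator load: 5 s of 70/30 random read/write, queue depth 128, 512 B I/O, cores 2-3.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!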
00:12:11.641 23:53:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:11.641 23:53:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:11.641 23:53:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[repeated 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries (00:12:11.910 through 00:12:12.847), interleaved around the nvme_tcp messages below, omitted; these are the in-flight perf I/Os completing with errors while the subsystem is deleted]
00:12:11.911 [2024-12-09 23:53:46.711320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd60800d4d0 is same with the state(6) to be set
00:12:12.846 [2024-12-09 23:53:47.685956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ab9b0 is same with the state(6) to be set
00:12:12.846 [2024-12-09 23:53:47.711239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20aa860 is same with the state(6) to be set
00:12:12.847 [2024-12-09 23:53:47.711465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20aa4a0 is same with the state(6) to be set
00:12:12.847 [2024-12-09 23:53:47.713896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd60800d800 is same with the state(6) to be set
00:12:12.847 [2024-12-09 23:53:47.714480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd60800d020 is same with the state(6) to be set
00:12:12.847 Initializing NVMe Controllers
00:12:12.847 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:12.847 Controller IO queue size 128, less than required.
00:12:12.847 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:12.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:12:12.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:12:12.847 Initialization complete. Launching workers.
00:12:12.847 ======================================================== 00:12:12.847 Latency(us) 00:12:12.847 Device Information : IOPS MiB/s Average min max 00:12:12.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 188.23 0.09 899663.90 359.87 1006597.14 00:12:12.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.84 0.08 914521.49 245.53 1009376.44 00:12:12.847 ======================================================== 00:12:12.847 Total : 349.06 0.17 906509.84 245.53 1009376.44 00:12:12.847 00:12:12.847 [2024-12-09 23:53:47.715018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ab9b0 (9): Bad file descriptor 00:12:12.847 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:12.847 23:53:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.847 23:53:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:12.847 23:53:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 206745 00:12:12.847 23:53:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 206745 00:12:13.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (206745) - No such process 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 206745 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 206745 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 206745 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.413 23:53:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.413 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.414 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.414 [2024-12-09 23:53:48.246810] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.414 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.414 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.414 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.414 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:13.414 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.414 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=207275 00:12:13.414 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:13.414 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:13.414 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 207275 00:12:13.414 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:13.414 [2024-12-09 23:53:48.334034] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
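The block of 'kill -0 ... / sleep 0.5' iterations that follows is a bounded poll: after the subsystem is recreated and spdk_nvme_perf is restarted with -t 3, the script simply waits, up to roughly twenty half-second ticks, for the perf process to finish on its own before asserting that the PID is gone. A small sketch of that pattern; wait_for_exit and budget are illustrative names, not the test script's:

# Sketch: bounded wait for a background perf process to exit.
wait_for_exit() {
    local pid=$1 budget=${2:-20} delay=0
    while kill -0 "$pid" 2>/dev/null; do        # process still alive?
        (( delay++ > budget )) && return 1      # give up after ~budget * 0.5 s
        sleep 0.5
    done
    return 0                                    # it exited (or never existed)
}

# Usage against the traced run: perf was started with -t 3, so it should be
# gone well inside the default budget.
# wait_for_exit "$perf_pid" 20 || echo "spdk_nvme_perf did not exit in time" >&2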
00:12:13.980 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:13.980 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 207275 00:12:13.980 23:53:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:14.547 23:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:14.547 23:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 207275 00:12:14.547 23:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:15.113 23:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:15.113 23:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 207275 00:12:15.113 23:53:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:15.372 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:15.372 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 207275 00:12:15.372 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:15.938 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:15.938 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 207275 00:12:15.938 23:53:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:16.504 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:16.504 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 207275 00:12:16.504 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:16.762 Initializing NVMe Controllers 00:12:16.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:16.762 Controller IO queue size 128, less than required. 00:12:16.762 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:16.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:16.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:16.762 Initialization complete. Launching workers. 
00:12:16.762 ======================================================== 00:12:16.762 Latency(us) 00:12:16.762 Device Information : IOPS MiB/s Average min max 00:12:16.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001890.42 1000120.30 1005604.10 00:12:16.762 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004165.08 1000143.64 1041213.78 00:12:16.762 ======================================================== 00:12:16.762 Total : 256.00 0.12 1003027.75 1000120.30 1041213.78 00:12:16.762 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 207275 00:12:17.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (207275) - No such process 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 207275 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:17.021 rmmod nvme_tcp 00:12:17.021 rmmod nvme_fabrics 00:12:17.021 rmmod nvme_keyring 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 206562 ']' 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 206562 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 206562 ']' 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 206562 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 206562 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 206562' 00:12:17.021 killing process with pid 206562 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 206562 00:12:17.021 23:53:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 206562 00:12:17.281 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:17.281 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:17.281 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:17.281 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:12:17.281 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:12:17.281 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:17.281 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:17.281 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:17.281 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:17.281 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.281 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.281 23:53:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:19.822 00:12:19.822 real 0m16.309s 00:12:19.822 user 0m29.411s 00:12:19.822 sys 0m5.391s 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:19.822 ************************************ 00:12:19.822 END TEST nvmf_delete_subsystem 00:12:19.822 ************************************ 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:19.822 ************************************ 00:12:19.822 START TEST nvmf_host_management 00:12:19.822 ************************************ 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:19.822 * Looking for test storage... 
00:12:19.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.822 --rc genhtml_branch_coverage=1 00:12:19.822 --rc genhtml_function_coverage=1 00:12:19.822 --rc genhtml_legend=1 00:12:19.822 --rc geninfo_all_blocks=1 00:12:19.822 --rc geninfo_unexecuted_blocks=1 00:12:19.822 00:12:19.822 ' 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.822 --rc genhtml_branch_coverage=1 00:12:19.822 --rc genhtml_function_coverage=1 00:12:19.822 --rc genhtml_legend=1 00:12:19.822 --rc geninfo_all_blocks=1 00:12:19.822 --rc geninfo_unexecuted_blocks=1 00:12:19.822 00:12:19.822 ' 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.822 --rc genhtml_branch_coverage=1 00:12:19.822 --rc genhtml_function_coverage=1 00:12:19.822 --rc genhtml_legend=1 00:12:19.822 --rc geninfo_all_blocks=1 00:12:19.822 --rc geninfo_unexecuted_blocks=1 00:12:19.822 00:12:19.822 ' 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.822 --rc genhtml_branch_coverage=1 00:12:19.822 --rc genhtml_function_coverage=1 00:12:19.822 --rc genhtml_legend=1 00:12:19.822 --rc geninfo_all_blocks=1 00:12:19.822 --rc geninfo_unexecuted_blocks=1 00:12:19.822 00:12:19.822 ' 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.822 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:19.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:12:19.823 23:53:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.398 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:26.399 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:26.399 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:26.399 Found net devices under 0000:86:00.0: cvl_0_0 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.399 23:54:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:26.399 Found net devices under 0000:86:00.1: cvl_0_1 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:26.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:12:26.399 00:12:26.399 --- 10.0.0.2 ping statistics --- 00:12:26.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.399 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:12:26.399 00:12:26.399 --- 10.0.0.1 ping statistics --- 00:12:26.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.399 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.399 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=211534 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 211534 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:26.400 23:54:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 211534 ']' 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.400 [2024-12-09 23:54:00.467270] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:12:26.400 [2024-12-09 23:54:00.467314] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.400 [2024-12-09 23:54:00.537254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.400 [2024-12-09 23:54:00.581014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.400 [2024-12-09 23:54:00.581047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.400 [2024-12-09 23:54:00.581056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.400 [2024-12-09 23:54:00.581063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.400 [2024-12-09 23:54:00.581067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
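(Note: condensed, the nvmftestinit sequence traced above amounts to roughly the following; interface names, addresses, and flags are taken from the log, paths are shortened, and the bookkeeping comment the ipts helper appends to the iptables rule is omitted.)

# move the target port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the NVMe-oF target inside the namespace on cores 1-4 (mask 0x1E), as traced above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &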
00:12:26.400 [2024-12-09 23:54:00.582549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.400 [2024-12-09 23:54:00.582656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.400 [2024-12-09 23:54:00.582761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.400 [2024-12-09 23:54:00.582762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.400 [2024-12-09 23:54:00.728594] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.400 Malloc0 00:12:26.400 [2024-12-09 23:54:00.801028] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=211606 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 211606 /var/tmp/bdevperf.sock 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 211606 ']' 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:26.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:26.400 { 00:12:26.400 "params": { 00:12:26.400 "name": "Nvme$subsystem", 00:12:26.400 "trtype": "$TEST_TRANSPORT", 00:12:26.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:26.400 "adrfam": "ipv4", 00:12:26.400 "trsvcid": "$NVMF_PORT", 00:12:26.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:26.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:26.400 "hdgst": ${hdgst:-false}, 00:12:26.400 "ddgst": ${ddgst:-false} 00:12:26.400 }, 00:12:26.400 "method": "bdev_nvme_attach_controller" 00:12:26.400 } 00:12:26.400 EOF 00:12:26.400 )") 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:12:26.400 23:54:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:26.400 "params": { 00:12:26.400 "name": "Nvme0", 00:12:26.400 "trtype": "tcp", 00:12:26.400 "traddr": "10.0.0.2", 00:12:26.400 "adrfam": "ipv4", 00:12:26.400 "trsvcid": "4420", 00:12:26.400 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:26.400 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:26.400 "hdgst": false, 00:12:26.400 "ddgst": false 00:12:26.400 }, 00:12:26.400 "method": "bdev_nvme_attach_controller" 00:12:26.400 }' 00:12:26.400 [2024-12-09 23:54:00.900564] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
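(Note: the JSON fragment printed by gen_nvmf_target_json above is what bdevperf reads through /dev/fd/63. Assembled, the configuration looks roughly like the sketch below; the inner params block is copied from the log, while the outer "subsystems"/"bdev" wrapper and the temporary file path are assumptions made for illustration.)

# roughly the config bdevperf consumes (wrapper structure assumed, params from the log)
cat > /tmp/nvme0_attach.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# same bdevperf flags as the trace: queue depth 64, 64 KiB IOs, verify workload, 10 s run
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0_attach.json -q 64 -o 65536 -w verify -t 10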
00:12:26.400 [2024-12-09 23:54:00.900610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid211606 ] 00:12:26.400 [2024-12-09 23:54:00.976977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.400 [2024-12-09 23:54:01.017461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.400 Running I/O for 10 seconds... 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.400 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.401 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=82 00:12:26.401 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 82 -ge 100 ']' 00:12:26.401 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:12:26.660 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:12:26.660 
23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:26.660 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:26.660 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:26.660 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.660 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.660 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.660 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=672 00:12:26.660 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 672 -ge 100 ']' 00:12:26.660 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:26.660 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:26.660 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:26.660 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:26.660 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.660 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.660 [2024-12-09 23:54:01.574206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.660 [2024-12-09 23:54:01.574249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.660 [2024-12-09 23:54:01.574260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.660 [2024-12-09 23:54:01.574267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.660 [2024-12-09 23:54:01.574275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.660 [2024-12-09 23:54:01.574282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.660 [2024-12-09 23:54:01.574290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.660 [2024-12-09 23:54:01.574296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.660 [2024-12-09 23:54:01.574303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1120 is same with the state(6) to be set 00:12:26.660 [2024-12-09 23:54:01.574667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.660 [2024-12-09 
23:54:01.574683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.660 [2024-12-09 23:54:01.574698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.660 [2024-12-09 23:54:01.574706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.660 [2024-12-09 23:54:01.574715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.660 [2024-12-09 23:54:01.574722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.660 [2024-12-09 23:54:01.574730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.660 [2024-12-09 23:54:01.574737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.660 [2024-12-09 23:54:01.574745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.660 [2024-12-09 23:54:01.574752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.660 [2024-12-09 23:54:01.574760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.660 [2024-12-09 23:54:01.574779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.660 [2024-12-09 23:54:01.574787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.660 [2024-12-09 23:54:01.574795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.660 [2024-12-09 23:54:01.574803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.660 [2024-12-09 23:54:01.574810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.660 [2024-12-09 23:54:01.574818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.660 [2024-12-09 23:54:01.574825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.574834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.574841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.574850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.574857] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.574865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.574871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.574880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.574886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.574894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.574901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.574909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.574919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.574927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.574934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.574942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.574949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.574957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.574964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.574974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.574981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.574989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.574995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575010] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575173] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575322] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.661 [2024-12-09 23:54:01.575441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.661 [2024-12-09 23:54:01.575447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575477] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575628] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:26.662 [2024-12-09 23:54:01.575675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.575683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ca060 is same with the state(6) to be set 00:12:26.662 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.662 [2024-12-09 23:54:01.576619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:12:26.662 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:26.662 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.662 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:26.662 task offset: 98304 on job bdev=Nvme0n1 fails 00:12:26.662 00:12:26.662 Latency(us) 00:12:26.662 [2024-12-09T22:54:01.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.662 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:26.662 Job: Nvme0n1 ended in about 0.40 seconds with error 00:12:26.662 Verification LBA range: start 0x0 length 0x400 00:12:26.662 Nvme0n1 : 0.40 1931.08 120.69 160.92 0.00 29747.68 1773.75 27240.18 00:12:26.662 [2024-12-09T22:54:01.598Z] =================================================================================================================== 00:12:26.662 [2024-12-09T22:54:01.598Z] Total : 1931.08 120.69 160.92 0.00 29747.68 1773.75 27240.18 00:12:26.662 [2024-12-09 23:54:01.579002] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:26.662 [2024-12-09 23:54:01.579022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b1120 (9): Bad file descriptor 00:12:26.662 [2024-12-09 23:54:01.583375] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:12:26.662 [2024-12-09 23:54:01.583454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:12:26.662 [2024-12-09 23:54:01.583476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 
cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.662 [2024-12-09 23:54:01.583489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:12:26.662 [2024-12-09 23:54:01.583497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:12:26.662 [2024-12-09 23:54:01.583504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:12:26.662 [2024-12-09 23:54:01.583511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x16b1120 00:12:26.662 [2024-12-09 23:54:01.583529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b1120 (9): Bad file descriptor 00:12:26.662 [2024-12-09 23:54:01.583540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:12:26.662 [2024-12-09 23:54:01.583547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:12:26.662 [2024-12-09 23:54:01.583556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:12:26.662 [2024-12-09 23:54:01.583564] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:12:26.662 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.662 23:54:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:28.036 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 211606 00:12:28.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh: line 91: kill: (211606) - No such process 00:12:28.036 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:28.036 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:28.037 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:28.037 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:28.037 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:12:28.037 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:12:28.037 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:28.037 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:28.037 { 00:12:28.037 "params": { 00:12:28.037 "name": "Nvme$subsystem", 00:12:28.037 "trtype": "$TEST_TRANSPORT", 00:12:28.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:28.037 "adrfam": "ipv4", 00:12:28.037 "trsvcid": "$NVMF_PORT", 00:12:28.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:28.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:28.037 "hdgst": ${hdgst:-false}, 00:12:28.037 
"ddgst": ${ddgst:-false} 00:12:28.037 }, 00:12:28.037 "method": "bdev_nvme_attach_controller" 00:12:28.037 } 00:12:28.037 EOF 00:12:28.037 )") 00:12:28.037 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:12:28.037 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:12:28.037 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:12:28.037 23:54:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:28.037 "params": { 00:12:28.037 "name": "Nvme0", 00:12:28.037 "trtype": "tcp", 00:12:28.037 "traddr": "10.0.0.2", 00:12:28.037 "adrfam": "ipv4", 00:12:28.037 "trsvcid": "4420", 00:12:28.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:28.037 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:28.037 "hdgst": false, 00:12:28.037 "ddgst": false 00:12:28.037 }, 00:12:28.037 "method": "bdev_nvme_attach_controller" 00:12:28.037 }' 00:12:28.037 [2024-12-09 23:54:02.642736] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:12:28.037 [2024-12-09 23:54:02.642785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid212081 ] 00:12:28.037 [2024-12-09 23:54:02.718053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.037 [2024-12-09 23:54:02.756693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.037 Running I/O for 1 seconds... 00:12:29.415 1984.00 IOPS, 124.00 MiB/s 00:12:29.415 Latency(us) 00:12:29.415 [2024-12-09T22:54:04.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.415 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:29.415 Verification LBA range: start 0x0 length 0x400 00:12:29.415 Nvme0n1 : 1.02 2004.97 125.31 0.00 0.00 31416.44 5157.40 27354.16 00:12:29.415 [2024-12-09T22:54:04.351Z] =================================================================================================================== 00:12:29.415 [2024-12-09T22:54:04.351Z] Total : 2004.97 125.31 0.00 0.00 31416.44 5157.40 27354.16 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:12:29.415 23:54:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:29.415 rmmod nvme_tcp 00:12:29.415 rmmod nvme_fabrics 00:12:29.415 rmmod nvme_keyring 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 211534 ']' 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 211534 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 211534 ']' 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 211534 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 211534 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 211534' 00:12:29.415 killing process with pid 211534 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 211534 00:12:29.415 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 211534 00:12:29.675 [2024-12-09 23:54:04.431437] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:29.675 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:29.675 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:29.675 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:29.675 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:12:29.675 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:12:29.675 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:29.675 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:12:29.675 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:29.675 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:29.675 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.675 23:54:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.675 23:54:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:32.216 00:12:32.216 real 0m12.317s 00:12:32.216 user 0m19.323s 00:12:32.216 sys 0m5.572s 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:32.216 ************************************ 00:12:32.216 END TEST nvmf_host_management 00:12:32.216 ************************************ 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:32.216 ************************************ 00:12:32.216 START TEST nvmf_lvol 00:12:32.216 ************************************ 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:32.216 * Looking for test storage... 
00:12:32.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:32.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.216 --rc genhtml_branch_coverage=1 00:12:32.216 --rc genhtml_function_coverage=1 00:12:32.216 --rc genhtml_legend=1 00:12:32.216 --rc geninfo_all_blocks=1 00:12:32.216 --rc geninfo_unexecuted_blocks=1 00:12:32.216 00:12:32.216 ' 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:32.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.216 --rc genhtml_branch_coverage=1 00:12:32.216 --rc genhtml_function_coverage=1 00:12:32.216 --rc genhtml_legend=1 00:12:32.216 --rc geninfo_all_blocks=1 00:12:32.216 --rc geninfo_unexecuted_blocks=1 00:12:32.216 00:12:32.216 ' 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:32.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.216 --rc genhtml_branch_coverage=1 00:12:32.216 --rc genhtml_function_coverage=1 00:12:32.216 --rc genhtml_legend=1 00:12:32.216 --rc geninfo_all_blocks=1 00:12:32.216 --rc geninfo_unexecuted_blocks=1 00:12:32.216 00:12:32.216 ' 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:32.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.216 --rc genhtml_branch_coverage=1 00:12:32.216 --rc genhtml_function_coverage=1 00:12:32.216 --rc genhtml_legend=1 00:12:32.216 --rc geninfo_all_blocks=1 00:12:32.216 --rc geninfo_unexecuted_blocks=1 00:12:32.216 00:12:32.216 ' 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.216 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:12:32.217 23:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:38.796 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.796 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:38.796 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.797 23:54:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:38.797 Found net devices under 0000:86:00.0: cvl_0_0 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:38.797 Found net devices under 0000:86:00.1: cvl_0_1 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:38.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:12:38.797 00:12:38.797 --- 10.0.0.2 ping statistics --- 00:12:38.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.797 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:12:38.797 00:12:38.797 --- 10.0.0.1 ping statistics --- 00:12:38.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.797 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=216304 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 216304 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 216304 ']' 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.797 23:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:38.797 [2024-12-09 23:54:12.850419] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:12:38.797 [2024-12-09 23:54:12.850462] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.797 [2024-12-09 23:54:12.930072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:38.797 [2024-12-09 23:54:12.971258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.797 [2024-12-09 23:54:12.971292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.797 [2024-12-09 23:54:12.971300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.797 [2024-12-09 23:54:12.971306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.797 [2024-12-09 23:54:12.971311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.797 [2024-12-09 23:54:12.972592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.797 [2024-12-09 23:54:12.972621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.797 [2024-12-09 23:54:12.972620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.797 23:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.797 23:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:12:38.797 23:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:38.797 23:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:38.797 23:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:38.797 23:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.797 23:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:38.797 [2024-12-09 23:54:13.290435] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.797 23:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:38.797 23:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:38.797 23:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:39.056 23:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:39.056 23:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:39.056 23:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:39.315 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a7b076e0-bf23-47d3-aa4b-f97da22cdc81 00:12:39.315 23:54:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u a7b076e0-bf23-47d3-aa4b-f97da22cdc81 lvol 20 00:12:39.574 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a4cf56bd-47d4-4b47-b301-642254d535b5 00:12:39.574 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:39.833 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a4cf56bd-47d4-4b47-b301-642254d535b5 00:12:40.092 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:40.092 [2024-12-09 23:54:14.956613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.092 23:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:40.351 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=216759 00:12:40.351 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:40.351 23:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:41.286 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_snapshot a4cf56bd-47d4-4b47-b301-642254d535b5 MY_SNAPSHOT 00:12:41.544 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=281eacd2-4b10-4a24-8b00-1c2d8bb5a7fe 00:12:41.545 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_resize a4cf56bd-47d4-4b47-b301-642254d535b5 30 00:12:41.803 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_clone 281eacd2-4b10-4a24-8b00-1c2d8bb5a7fe MY_CLONE 00:12:42.075 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=364ac711-cd0c-4609-8a4b-18bde3aedc81 00:12:42.075 23:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_inflate 364ac711-cd0c-4609-8a4b-18bde3aedc81 00:12:42.643 23:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 216759 00:12:50.760 Initializing NVMe Controllers 00:12:50.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:50.760 Controller IO queue size 128, less than required. 00:12:50.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:12:50.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:50.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:50.760 Initialization complete. Launching workers. 00:12:50.760 ======================================================== 00:12:50.760 Latency(us) 00:12:50.760 Device Information : IOPS MiB/s Average min max 00:12:50.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11904.80 46.50 10753.42 1613.05 61637.17 00:12:50.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11991.50 46.84 10679.02 2932.62 62057.29 00:12:50.760 ======================================================== 00:12:50.760 Total : 23896.30 93.34 10716.09 1613.05 62057.29 00:12:50.760 00:12:50.760 23:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:51.018 23:54:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete a4cf56bd-47d4-4b47-b301-642254d535b5 00:12:51.277 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a7b076e0-bf23-47d3-aa4b-f97da22cdc81 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:51.536 rmmod nvme_tcp 00:12:51.536 rmmod nvme_fabrics 00:12:51.536 rmmod nvme_keyring 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 216304 ']' 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 216304 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 216304 ']' 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 216304 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216304 00:12:51.536 23:54:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216304' 00:12:51.536 killing process with pid 216304 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 216304 00:12:51.536 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 216304 00:12:51.796 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:51.796 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:51.796 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:51.796 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:12:51.796 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:12:51.796 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:51.796 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:12:51.796 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:51.796 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:51.796 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.796 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.796 23:54:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:54.337 00:12:54.337 real 0m22.048s 00:12:54.337 user 1m3.648s 00:12:54.337 sys 0m7.553s 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:54.337 ************************************ 00:12:54.337 END TEST nvmf_lvol 00:12:54.337 ************************************ 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:54.337 ************************************ 00:12:54.337 START TEST nvmf_lvs_grow 00:12:54.337 ************************************ 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:54.337 * Looking for test storage... 
00:12:54.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:54.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.337 --rc genhtml_branch_coverage=1 00:12:54.337 --rc genhtml_function_coverage=1 00:12:54.337 --rc genhtml_legend=1 00:12:54.337 --rc geninfo_all_blocks=1 00:12:54.337 --rc geninfo_unexecuted_blocks=1 00:12:54.337 00:12:54.337 ' 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:54.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.337 --rc genhtml_branch_coverage=1 00:12:54.337 --rc genhtml_function_coverage=1 00:12:54.337 --rc genhtml_legend=1 00:12:54.337 --rc geninfo_all_blocks=1 00:12:54.337 --rc geninfo_unexecuted_blocks=1 00:12:54.337 00:12:54.337 ' 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:54.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.337 --rc genhtml_branch_coverage=1 00:12:54.337 --rc genhtml_function_coverage=1 00:12:54.337 --rc genhtml_legend=1 00:12:54.337 --rc geninfo_all_blocks=1 00:12:54.337 --rc geninfo_unexecuted_blocks=1 00:12:54.337 00:12:54.337 ' 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:54.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.337 --rc genhtml_branch_coverage=1 00:12:54.337 --rc genhtml_function_coverage=1 00:12:54.337 --rc genhtml_legend=1 00:12:54.337 --rc geninfo_all_blocks=1 00:12:54.337 --rc geninfo_unexecuted_blocks=1 00:12:54.337 00:12:54.337 ' 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:54.337 23:54:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.337 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:54.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:12:54.338 23:54:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:00.927 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:00.927 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:00.927 23:54:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:00.927 Found net devices under 0000:86:00.0: cvl_0_0 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:00.927 Found net devices under 0000:86:00.1: cvl_0_1 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:00.927 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:00.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:13:00.928 00:13:00.928 --- 10.0.0.2 ping statistics --- 00:13:00.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.928 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:00.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:13:00.928 00:13:00.928 --- 10.0.0.1 ping statistics --- 00:13:00.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.928 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=222188 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 222188 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 222188 ']' 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.928 23:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:00.928 [2024-12-09 23:54:35.016968] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
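For reference, the test-network bring-up that the ping checks above verify reduces to roughly the following commands (a condensed sketch: interface names and addresses are the ones from this run, and the authoritative logic is nvmf_tcp_init in test/nvmf/common.sh, which handles more hardware variants):

  ip netns add cvl_0_0_ns_spdk                              # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator NIC stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                        # target reachable from the root namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # and the initiator reachable from the target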
00:13:00.928 [2024-12-09 23:54:35.017013] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.928 [2024-12-09 23:54:35.095922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.928 [2024-12-09 23:54:35.136293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.928 [2024-12-09 23:54:35.136325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.928 [2024-12-09 23:54:35.136332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.928 [2024-12-09 23:54:35.136341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.928 [2024-12-09 23:54:35.136346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.928 [2024-12-09 23:54:35.136842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:00.928 [2024-12-09 23:54:35.438014] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:00.928 ************************************ 00:13:00.928 START TEST lvs_grow_clean 00:13:00.928 ************************************ 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:00.928 23:54:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:00.928 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:01.200 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0a44305e-53e3-4b1e-98f6-6e011cbe15cc 00:13:01.200 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a44305e-53e3-4b1e-98f6-6e011cbe15cc 00:13:01.200 23:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:01.460 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:01.460 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:01.460 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 0a44305e-53e3-4b1e-98f6-6e011cbe15cc lvol 150 00:13:01.460 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7bc5e148-97fb-4916-8934-8b144d878665 00:13:01.460 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:13:01.460 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:01.719 [2024-12-09 23:54:36.511868] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:01.719 [2024-12-09 23:54:36.511913] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:01.719 true 00:13:01.719 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 0a44305e-53e3-4b1e-98f6-6e011cbe15cc 00:13:01.719 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:01.979 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:01.979 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:01.979 23:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7bc5e148-97fb-4916-8934-8b144d878665 00:13:02.238 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:02.497 [2024-12-09 23:54:37.270122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.497 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:02.757 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=222684 00:13:02.757 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:02.757 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:02.757 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 222684 /var/tmp/bdevperf.sock 00:13:02.757 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 222684 ']' 00:13:02.757 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:02.757 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.757 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:02.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:02.757 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.757 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:02.757 [2024-12-09 23:54:37.517020] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
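The target-side setup traced above boils down to this RPC sequence (a sketch, not the verbatim script: rpc.py stands for the full scripts/rpc.py path, sizes and the NQN are taken from this run, and <lvs_uuid>/<lvol_uuid> stand for the UUIDs printed in the trace; see test/nvmf/target/nvmf_lvs_grow.sh for the real flow):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                      # same transport options as traced above
  truncate -s 200M /path/to/aio_bdev                                  # backing file for the AIO bdev
  rpc.py bdev_aio_create /path/to/aio_bdev aio_bdev 4096              # 4 KiB block size
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150                      # 150 MiB lvol on the new lvstore
  truncate -s 400M /path/to/aio_bdev                                  # grow the backing file ...
  rpc.py bdev_aio_rescan aio_bdev                                     # ... and let the AIO bdev pick up the new size
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol_uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420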
00:13:02.757 [2024-12-09 23:54:37.517065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222684 ] 00:13:02.757 [2024-12-09 23:54:37.591873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.757 [2024-12-09 23:54:37.631354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.016 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.016 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:13:03.016 23:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:03.296 Nvme0n1 00:13:03.296 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:03.555 [ 00:13:03.555 { 00:13:03.555 "name": "Nvme0n1", 00:13:03.555 "aliases": [ 00:13:03.555 "7bc5e148-97fb-4916-8934-8b144d878665" 00:13:03.555 ], 00:13:03.555 "product_name": "NVMe disk", 00:13:03.555 "block_size": 4096, 00:13:03.555 "num_blocks": 38912, 00:13:03.555 "uuid": "7bc5e148-97fb-4916-8934-8b144d878665", 00:13:03.555 "numa_id": 1, 00:13:03.555 "assigned_rate_limits": { 00:13:03.555 "rw_ios_per_sec": 0, 00:13:03.555 "rw_mbytes_per_sec": 0, 00:13:03.555 "r_mbytes_per_sec": 0, 00:13:03.555 "w_mbytes_per_sec": 0 00:13:03.555 }, 00:13:03.555 "claimed": false, 00:13:03.555 "zoned": false, 00:13:03.555 "supported_io_types": { 00:13:03.555 "read": true, 00:13:03.555 "write": true, 00:13:03.555 "unmap": true, 00:13:03.555 "flush": true, 00:13:03.555 "reset": true, 00:13:03.555 "nvme_admin": true, 00:13:03.555 "nvme_io": true, 00:13:03.555 "nvme_io_md": false, 00:13:03.555 "write_zeroes": true, 00:13:03.555 "zcopy": false, 00:13:03.555 "get_zone_info": false, 00:13:03.555 "zone_management": false, 00:13:03.555 "zone_append": false, 00:13:03.555 "compare": true, 00:13:03.555 "compare_and_write": true, 00:13:03.555 "abort": true, 00:13:03.555 "seek_hole": false, 00:13:03.555 "seek_data": false, 00:13:03.555 "copy": true, 00:13:03.555 "nvme_iov_md": false 00:13:03.555 }, 00:13:03.555 "memory_domains": [ 00:13:03.555 { 00:13:03.555 "dma_device_id": "system", 00:13:03.555 "dma_device_type": 1 00:13:03.555 } 00:13:03.555 ], 00:13:03.555 "driver_specific": { 00:13:03.555 "nvme": [ 00:13:03.555 { 00:13:03.555 "trid": { 00:13:03.555 "trtype": "TCP", 00:13:03.555 "adrfam": "IPv4", 00:13:03.555 "traddr": "10.0.0.2", 00:13:03.555 "trsvcid": "4420", 00:13:03.555 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:03.555 }, 00:13:03.555 "ctrlr_data": { 00:13:03.555 "cntlid": 1, 00:13:03.555 "vendor_id": "0x8086", 00:13:03.555 "model_number": "SPDK bdev Controller", 00:13:03.555 "serial_number": "SPDK0", 00:13:03.555 "firmware_revision": "25.01", 00:13:03.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:03.555 "oacs": { 00:13:03.555 "security": 0, 00:13:03.555 "format": 0, 00:13:03.555 "firmware": 0, 00:13:03.555 "ns_manage": 0 00:13:03.555 }, 00:13:03.555 "multi_ctrlr": true, 00:13:03.555 
"ana_reporting": false 00:13:03.555 }, 00:13:03.556 "vs": { 00:13:03.556 "nvme_version": "1.3" 00:13:03.556 }, 00:13:03.556 "ns_data": { 00:13:03.556 "id": 1, 00:13:03.556 "can_share": true 00:13:03.556 } 00:13:03.556 } 00:13:03.556 ], 00:13:03.556 "mp_policy": "active_passive" 00:13:03.556 } 00:13:03.556 } 00:13:03.556 ] 00:13:03.556 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=222704 00:13:03.556 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:03.556 23:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:03.556 Running I/O for 10 seconds... 00:13:04.931 Latency(us) 00:13:04.931 [2024-12-09T22:54:39.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:04.931 Nvme0n1 : 1.00 22800.00 89.06 0.00 0.00 0.00 0.00 0.00 00:13:04.931 [2024-12-09T22:54:39.867Z] =================================================================================================================== 00:13:04.931 [2024-12-09T22:54:39.867Z] Total : 22800.00 89.06 0.00 0.00 0.00 0.00 0.00 00:13:04.931 00:13:05.498 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0a44305e-53e3-4b1e-98f6-6e011cbe15cc 00:13:05.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.757 Nvme0n1 : 2.00 22833.50 89.19 0.00 0.00 0.00 0.00 0.00 00:13:05.757 [2024-12-09T22:54:40.693Z] =================================================================================================================== 00:13:05.757 [2024-12-09T22:54:40.693Z] Total : 22833.50 89.19 0.00 0.00 0.00 0.00 0.00 00:13:05.757 00:13:05.757 true 00:13:05.757 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a44305e-53e3-4b1e-98f6-6e011cbe15cc 00:13:05.757 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:06.016 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:06.016 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:06.016 23:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 222704 00:13:06.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:06.583 Nvme0n1 : 3.00 22913.33 89.51 0.00 0.00 0.00 0.00 0.00 00:13:06.583 [2024-12-09T22:54:41.519Z] =================================================================================================================== 00:13:06.583 [2024-12-09T22:54:41.519Z] Total : 22913.33 89.51 0.00 0.00 0.00 0.00 0.00 00:13:06.583 00:13:07.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:07.959 Nvme0n1 : 4.00 23012.25 89.89 0.00 0.00 0.00 0.00 0.00 00:13:07.959 [2024-12-09T22:54:42.895Z] 
=================================================================================================================== 00:13:07.959 [2024-12-09T22:54:42.895Z] Total : 23012.25 89.89 0.00 0.00 0.00 0.00 0.00 00:13:07.959 00:13:08.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.527 Nvme0n1 : 5.00 23076.00 90.14 0.00 0.00 0.00 0.00 0.00 00:13:08.527 [2024-12-09T22:54:43.463Z] =================================================================================================================== 00:13:08.527 [2024-12-09T22:54:43.463Z] Total : 23076.00 90.14 0.00 0.00 0.00 0.00 0.00 00:13:08.527 00:13:09.903 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.903 Nvme0n1 : 6.00 23119.83 90.31 0.00 0.00 0.00 0.00 0.00 00:13:09.903 [2024-12-09T22:54:44.839Z] =================================================================================================================== 00:13:09.903 [2024-12-09T22:54:44.839Z] Total : 23119.83 90.31 0.00 0.00 0.00 0.00 0.00 00:13:09.903 00:13:10.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.838 Nvme0n1 : 7.00 23155.86 90.45 0.00 0.00 0.00 0.00 0.00 00:13:10.838 [2024-12-09T22:54:45.774Z] =================================================================================================================== 00:13:10.838 [2024-12-09T22:54:45.774Z] Total : 23155.86 90.45 0.00 0.00 0.00 0.00 0.00 00:13:10.838 00:13:11.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.775 Nvme0n1 : 8.00 23180.25 90.55 0.00 0.00 0.00 0.00 0.00 00:13:11.775 [2024-12-09T22:54:46.711Z] =================================================================================================================== 00:13:11.775 [2024-12-09T22:54:46.711Z] Total : 23180.25 90.55 0.00 0.00 0.00 0.00 0.00 00:13:11.775 00:13:12.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.710 Nvme0n1 : 9.00 23202.67 90.64 0.00 0.00 0.00 0.00 0.00 00:13:12.710 [2024-12-09T22:54:47.646Z] =================================================================================================================== 00:13:12.710 [2024-12-09T22:54:47.646Z] Total : 23202.67 90.64 0.00 0.00 0.00 0.00 0.00 00:13:12.710 00:13:13.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.645 Nvme0n1 : 10.00 23223.00 90.71 0.00 0.00 0.00 0.00 0.00 00:13:13.645 [2024-12-09T22:54:48.581Z] =================================================================================================================== 00:13:13.645 [2024-12-09T22:54:48.581Z] Total : 23223.00 90.71 0.00 0.00 0.00 0.00 0.00 00:13:13.645 00:13:13.645 00:13:13.646 Latency(us) 00:13:13.646 [2024-12-09T22:54:48.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.646 Nvme0n1 : 10.00 23224.88 90.72 0.00 0.00 5508.52 3191.32 11853.47 00:13:13.646 [2024-12-09T22:54:48.582Z] =================================================================================================================== 00:13:13.646 [2024-12-09T22:54:48.582Z] Total : 23224.88 90.72 0.00 0.00 5508.52 3191.32 11853.47 00:13:13.646 { 00:13:13.646 "results": [ 00:13:13.646 { 00:13:13.646 "job": "Nvme0n1", 00:13:13.646 "core_mask": "0x2", 00:13:13.646 "workload": "randwrite", 00:13:13.646 "status": "finished", 00:13:13.646 "queue_depth": 128, 00:13:13.646 "io_size": 4096, 00:13:13.646 
"runtime": 10.004702, 00:13:13.646 "iops": 23224.879661583123, 00:13:13.646 "mibps": 90.72218617805908, 00:13:13.646 "io_failed": 0, 00:13:13.646 "io_timeout": 0, 00:13:13.646 "avg_latency_us": 5508.522020914504, 00:13:13.646 "min_latency_us": 3191.318260869565, 00:13:13.646 "max_latency_us": 11853.467826086957 00:13:13.646 } 00:13:13.646 ], 00:13:13.646 "core_count": 1 00:13:13.646 } 00:13:13.646 23:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 222684 00:13:13.646 23:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 222684 ']' 00:13:13.646 23:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 222684 00:13:13.646 23:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:13:13.646 23:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.646 23:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 222684 00:13:13.646 23:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:13.646 23:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:13.646 23:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 222684' 00:13:13.646 killing process with pid 222684 00:13:13.646 23:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 222684 00:13:13.646 Received shutdown signal, test time was about 10.000000 seconds 00:13:13.646 00:13:13.646 Latency(us) 00:13:13.646 [2024-12-09T22:54:48.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.646 [2024-12-09T22:54:48.582Z] =================================================================================================================== 00:13:13.646 [2024-12-09T22:54:48.582Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:13.646 23:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 222684 00:13:13.905 23:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:14.165 23:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:14.425 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a44305e-53e3-4b1e-98f6-6e011cbe15cc 00:13:14.425 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:14.425 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:14.425 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:14.425 23:54:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:14.684 [2024-12-09 23:54:49.472818] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:14.684 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a44305e-53e3-4b1e-98f6-6e011cbe15cc 00:13:14.684 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:13:14.684 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a44305e-53e3-4b1e-98f6-6e011cbe15cc 00:13:14.684 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:14.684 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.684 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:14.684 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.684 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:14.684 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.684 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:14.684 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:13:14.684 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a44305e-53e3-4b1e-98f6-6e011cbe15cc 00:13:14.944 request: 00:13:14.944 { 00:13:14.944 "uuid": "0a44305e-53e3-4b1e-98f6-6e011cbe15cc", 00:13:14.944 "method": "bdev_lvol_get_lvstores", 00:13:14.944 "req_id": 1 00:13:14.944 } 00:13:14.944 Got JSON-RPC error response 00:13:14.944 response: 00:13:14.944 { 00:13:14.944 "code": -19, 00:13:14.944 "message": "No such device" 00:13:14.944 } 00:13:14.944 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:13:14.944 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:14.944 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:14.944 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:14.944 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:15.203 aio_bdev 00:13:15.203 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7bc5e148-97fb-4916-8934-8b144d878665 00:13:15.203 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=7bc5e148-97fb-4916-8934-8b144d878665 00:13:15.203 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.203 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:13:15.203 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.203 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.203 23:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:15.203 23:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b 7bc5e148-97fb-4916-8934-8b144d878665 -t 2000 00:13:15.463 [ 00:13:15.463 { 00:13:15.463 "name": "7bc5e148-97fb-4916-8934-8b144d878665", 00:13:15.463 "aliases": [ 00:13:15.463 "lvs/lvol" 00:13:15.463 ], 00:13:15.463 "product_name": "Logical Volume", 00:13:15.463 "block_size": 4096, 00:13:15.463 "num_blocks": 38912, 00:13:15.463 "uuid": "7bc5e148-97fb-4916-8934-8b144d878665", 00:13:15.463 "assigned_rate_limits": { 00:13:15.463 "rw_ios_per_sec": 0, 00:13:15.463 "rw_mbytes_per_sec": 0, 00:13:15.463 "r_mbytes_per_sec": 0, 00:13:15.463 "w_mbytes_per_sec": 0 00:13:15.463 }, 00:13:15.463 "claimed": false, 00:13:15.463 "zoned": false, 00:13:15.463 "supported_io_types": { 00:13:15.463 "read": true, 00:13:15.463 "write": true, 00:13:15.463 "unmap": true, 00:13:15.463 "flush": false, 00:13:15.463 "reset": true, 00:13:15.463 "nvme_admin": false, 00:13:15.463 "nvme_io": false, 00:13:15.463 "nvme_io_md": false, 00:13:15.463 "write_zeroes": true, 00:13:15.463 "zcopy": false, 00:13:15.463 "get_zone_info": false, 00:13:15.463 "zone_management": false, 00:13:15.463 "zone_append": false, 00:13:15.463 "compare": false, 00:13:15.463 "compare_and_write": false, 00:13:15.463 "abort": false, 00:13:15.463 "seek_hole": true, 00:13:15.463 "seek_data": true, 00:13:15.463 "copy": false, 00:13:15.463 "nvme_iov_md": false 00:13:15.463 }, 00:13:15.463 "driver_specific": { 00:13:15.463 "lvol": { 00:13:15.463 "lvol_store_uuid": "0a44305e-53e3-4b1e-98f6-6e011cbe15cc", 00:13:15.463 "base_bdev": "aio_bdev", 00:13:15.463 "thin_provision": false, 00:13:15.463 "num_allocated_clusters": 38, 00:13:15.463 "snapshot": false, 00:13:15.463 "clone": false, 00:13:15.463 "esnap_clone": false 00:13:15.463 } 00:13:15.463 } 00:13:15.463 } 00:13:15.463 ] 00:13:15.463 23:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:13:15.463 23:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a44305e-53e3-4b1e-98f6-6e011cbe15cc 
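The cluster counts this test asserts follow from the sizes above; as a back-of-the-envelope check (assuming roughly one 4 MiB cluster is reserved for lvstore metadata in this configuration):

  200 MiB backing file at 4 MiB per cluster  -> 49 total_data_clusters before the grow
  400 MiB backing file at 4 MiB per cluster  -> 99 total_data_clusters after bdev_lvol_grow_lvstore
  150 MiB lvol -> ceil(150 MiB / 4 MiB) = 38 num_allocated_clusters
  99 - 38 = 61 free_clusters, which is the value checked here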
00:13:15.463 23:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:15.740 23:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:15.740 23:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a44305e-53e3-4b1e-98f6-6e011cbe15cc 00:13:15.740 23:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:15.999 23:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:15.999 23:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete 7bc5e148-97fb-4916-8934-8b144d878665 00:13:15.999 23:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0a44305e-53e3-4b1e-98f6-6e011cbe15cc 00:13:16.258 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:13:16.517 00:13:16.517 real 0m15.843s 00:13:16.517 user 0m15.376s 00:13:16.517 sys 0m1.521s 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:16.517 ************************************ 00:13:16.517 END TEST lvs_grow_clean 00:13:16.517 ************************************ 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:16.517 ************************************ 00:13:16.517 START TEST lvs_grow_dirty 00:13:16.517 ************************************ 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:13:16.517 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:16.776 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:16.776 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:17.036 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3fe34830-0e23-4341-b36d-2b45fe87ee4c 00:13:17.036 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fe34830-0e23-4341-b36d-2b45fe87ee4c 00:13:17.036 23:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:17.295 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:17.295 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:17.295 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 3fe34830-0e23-4341-b36d-2b45fe87ee4c lvol 150 00:13:17.295 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ceed18cf-347a-4c88-a26f-cf42e9f23116 00:13:17.295 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:13:17.295 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:17.554 [2024-12-09 23:54:52.402265] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:17.554 [2024-12-09 23:54:52.402312] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:17.554 true 00:13:17.554 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fe34830-0e23-4341-b36d-2b45fe87ee4c 00:13:17.554 
23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:17.813 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:17.813 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:18.072 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ceed18cf-347a-4c88-a26f-cf42e9f23116 00:13:18.073 23:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:18.332 [2024-12-09 23:54:53.160518] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.332 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:18.591 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=225295 00:13:18.591 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:18.591 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:18.591 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 225295 /var/tmp/bdevperf.sock 00:13:18.591 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 225295 ']' 00:13:18.591 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:18.591 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.591 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:18.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:18.591 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.591 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:18.591 [2024-12-09 23:54:53.396377] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:13:18.591 [2024-12-09 23:54:53.396422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid225295 ] 00:13:18.591 [2024-12-09 23:54:53.469552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.591 [2024-12-09 23:54:53.509001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.850 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.850 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:13:18.850 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:19.108 Nvme0n1 00:13:19.108 23:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:19.367 [ 00:13:19.367 { 00:13:19.367 "name": "Nvme0n1", 00:13:19.367 "aliases": [ 00:13:19.367 "ceed18cf-347a-4c88-a26f-cf42e9f23116" 00:13:19.367 ], 00:13:19.367 "product_name": "NVMe disk", 00:13:19.367 "block_size": 4096, 00:13:19.367 "num_blocks": 38912, 00:13:19.367 "uuid": "ceed18cf-347a-4c88-a26f-cf42e9f23116", 00:13:19.367 "numa_id": 1, 00:13:19.367 "assigned_rate_limits": { 00:13:19.367 "rw_ios_per_sec": 0, 00:13:19.367 "rw_mbytes_per_sec": 0, 00:13:19.367 "r_mbytes_per_sec": 0, 00:13:19.367 "w_mbytes_per_sec": 0 00:13:19.367 }, 00:13:19.367 "claimed": false, 00:13:19.367 "zoned": false, 00:13:19.367 "supported_io_types": { 00:13:19.367 "read": true, 00:13:19.367 "write": true, 00:13:19.367 "unmap": true, 00:13:19.367 "flush": true, 00:13:19.367 "reset": true, 00:13:19.367 "nvme_admin": true, 00:13:19.367 "nvme_io": true, 00:13:19.367 "nvme_io_md": false, 00:13:19.367 "write_zeroes": true, 00:13:19.367 "zcopy": false, 00:13:19.367 "get_zone_info": false, 00:13:19.367 "zone_management": false, 00:13:19.367 "zone_append": false, 00:13:19.367 "compare": true, 00:13:19.367 "compare_and_write": true, 00:13:19.367 "abort": true, 00:13:19.367 "seek_hole": false, 00:13:19.367 "seek_data": false, 00:13:19.367 "copy": true, 00:13:19.367 "nvme_iov_md": false 00:13:19.367 }, 00:13:19.367 "memory_domains": [ 00:13:19.367 { 00:13:19.368 "dma_device_id": "system", 00:13:19.368 "dma_device_type": 1 00:13:19.368 } 00:13:19.368 ], 00:13:19.368 "driver_specific": { 00:13:19.368 "nvme": [ 00:13:19.368 { 00:13:19.368 "trid": { 00:13:19.368 "trtype": "TCP", 00:13:19.368 "adrfam": "IPv4", 00:13:19.368 "traddr": "10.0.0.2", 00:13:19.368 "trsvcid": "4420", 00:13:19.368 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:19.368 }, 00:13:19.368 "ctrlr_data": { 00:13:19.368 "cntlid": 1, 00:13:19.368 "vendor_id": "0x8086", 00:13:19.368 "model_number": "SPDK bdev Controller", 00:13:19.368 "serial_number": "SPDK0", 00:13:19.368 "firmware_revision": "25.01", 00:13:19.368 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:19.368 "oacs": { 00:13:19.368 "security": 0, 00:13:19.368 "format": 0, 00:13:19.368 "firmware": 0, 00:13:19.368 "ns_manage": 0 00:13:19.368 }, 00:13:19.368 "multi_ctrlr": true, 00:13:19.368 
"ana_reporting": false 00:13:19.368 }, 00:13:19.368 "vs": { 00:13:19.368 "nvme_version": "1.3" 00:13:19.368 }, 00:13:19.368 "ns_data": { 00:13:19.368 "id": 1, 00:13:19.368 "can_share": true 00:13:19.368 } 00:13:19.368 } 00:13:19.368 ], 00:13:19.368 "mp_policy": "active_passive" 00:13:19.368 } 00:13:19.368 } 00:13:19.368 ] 00:13:19.368 23:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=225519 00:13:19.368 23:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:19.368 23:54:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:19.368 Running I/O for 10 seconds... 00:13:20.311 Latency(us) 00:13:20.311 [2024-12-09T22:54:55.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:20.311 Nvme0n1 : 1.00 22897.00 89.44 0.00 0.00 0.00 0.00 0.00 00:13:20.311 [2024-12-09T22:54:55.247Z] =================================================================================================================== 00:13:20.311 [2024-12-09T22:54:55.247Z] Total : 22897.00 89.44 0.00 0.00 0.00 0.00 0.00 00:13:20.311 00:13:21.247 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3fe34830-0e23-4341-b36d-2b45fe87ee4c 00:13:21.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:21.505 Nvme0n1 : 2.00 23036.50 89.99 0.00 0.00 0.00 0.00 0.00 00:13:21.505 [2024-12-09T22:54:56.441Z] =================================================================================================================== 00:13:21.505 [2024-12-09T22:54:56.441Z] Total : 23036.50 89.99 0.00 0.00 0.00 0.00 0.00 00:13:21.505 00:13:21.505 true 00:13:21.505 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fe34830-0e23-4341-b36d-2b45fe87ee4c 00:13:21.505 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:21.764 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:21.764 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:21.764 23:54:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 225519 00:13:22.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:22.331 Nvme0n1 : 3.00 23062.33 90.09 0.00 0.00 0.00 0.00 0.00 00:13:22.331 [2024-12-09T22:54:57.267Z] =================================================================================================================== 00:13:22.331 [2024-12-09T22:54:57.267Z] Total : 23062.33 90.09 0.00 0.00 0.00 0.00 0.00 00:13:22.331 00:13:23.267 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:23.267 Nvme0n1 : 4.00 23117.75 90.30 0.00 0.00 0.00 0.00 0.00 00:13:23.267 [2024-12-09T22:54:58.203Z] 
=================================================================================================================== 00:13:23.267 [2024-12-09T22:54:58.203Z] Total : 23117.75 90.30 0.00 0.00 0.00 0.00 0.00 00:13:23.267 00:13:24.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:24.642 Nvme0n1 : 5.00 23173.40 90.52 0.00 0.00 0.00 0.00 0.00 00:13:24.642 [2024-12-09T22:54:59.578Z] =================================================================================================================== 00:13:24.642 [2024-12-09T22:54:59.578Z] Total : 23173.40 90.52 0.00 0.00 0.00 0.00 0.00 00:13:24.642 00:13:25.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:25.578 Nvme0n1 : 6.00 23175.17 90.53 0.00 0.00 0.00 0.00 0.00 00:13:25.578 [2024-12-09T22:55:00.514Z] =================================================================================================================== 00:13:25.578 [2024-12-09T22:55:00.514Z] Total : 23175.17 90.53 0.00 0.00 0.00 0.00 0.00 00:13:25.578 00:13:26.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:26.514 Nvme0n1 : 7.00 23185.43 90.57 0.00 0.00 0.00 0.00 0.00 00:13:26.514 [2024-12-09T22:55:01.450Z] =================================================================================================================== 00:13:26.514 [2024-12-09T22:55:01.450Z] Total : 23185.43 90.57 0.00 0.00 0.00 0.00 0.00 00:13:26.514 00:13:27.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:27.881 Nvme0n1 : 8.00 23216.75 90.69 0.00 0.00 0.00 0.00 0.00 00:13:27.881 [2024-12-09T22:55:02.817Z] =================================================================================================================== 00:13:27.881 [2024-12-09T22:55:02.817Z] Total : 23216.75 90.69 0.00 0.00 0.00 0.00 0.00 00:13:27.881 00:13:28.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:28.450 Nvme0n1 : 9.00 23237.89 90.77 0.00 0.00 0.00 0.00 0.00 00:13:28.450 [2024-12-09T22:55:03.386Z] =================================================================================================================== 00:13:28.450 [2024-12-09T22:55:03.386Z] Total : 23237.89 90.77 0.00 0.00 0.00 0.00 0.00 00:13:28.450 00:13:29.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:29.399 Nvme0n1 : 10.00 23252.50 90.83 0.00 0.00 0.00 0.00 0.00 00:13:29.399 [2024-12-09T22:55:04.335Z] =================================================================================================================== 00:13:29.399 [2024-12-09T22:55:04.335Z] Total : 23252.50 90.83 0.00 0.00 0.00 0.00 0.00 00:13:29.399 00:13:29.399 00:13:29.399 Latency(us) 00:13:29.399 [2024-12-09T22:55:04.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:29.399 Nvme0n1 : 10.01 23258.69 90.85 0.00 0.00 5499.98 3191.32 11169.61 00:13:29.399 [2024-12-09T22:55:04.335Z] =================================================================================================================== 00:13:29.399 [2024-12-09T22:55:04.335Z] Total : 23258.69 90.85 0.00 0.00 5499.98 3191.32 11169.61 00:13:29.399 { 00:13:29.399 "results": [ 00:13:29.399 { 00:13:29.399 "job": "Nvme0n1", 00:13:29.399 "core_mask": "0x2", 00:13:29.399 "workload": "randwrite", 00:13:29.399 "status": "finished", 00:13:29.399 "queue_depth": 128, 00:13:29.399 "io_size": 4096, 00:13:29.399 
"runtime": 10.00684, 00:13:29.399 "iops": 23258.691055318162, 00:13:29.399 "mibps": 90.85426193483657, 00:13:29.399 "io_failed": 0, 00:13:29.399 "io_timeout": 0, 00:13:29.399 "avg_latency_us": 5499.97930319262, 00:13:29.399 "min_latency_us": 3191.318260869565, 00:13:29.399 "max_latency_us": 11169.613913043479 00:13:29.399 } 00:13:29.399 ], 00:13:29.399 "core_count": 1 00:13:29.399 } 00:13:29.399 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 225295 00:13:29.399 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 225295 ']' 00:13:29.399 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 225295 00:13:29.399 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:13:29.399 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.399 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 225295 00:13:29.400 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:29.400 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:29.400 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 225295' 00:13:29.400 killing process with pid 225295 00:13:29.400 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 225295 00:13:29.400 Received shutdown signal, test time was about 10.000000 seconds 00:13:29.400 00:13:29.400 Latency(us) 00:13:29.400 [2024-12-09T22:55:04.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.400 [2024-12-09T22:55:04.336Z] =================================================================================================================== 00:13:29.400 [2024-12-09T22:55:04.336Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:29.400 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 225295 00:13:29.658 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:29.917 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:30.177 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fe34830-0e23-4341-b36d-2b45fe87ee4c 00:13:30.177 23:55:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:30.177 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:30.177 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:30.177 23:55:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 222188 00:13:30.177 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 222188 00:13:30.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 222188 Killed "${NVMF_APP[@]}" "$@" 00:13:30.436 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:30.436 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:30.436 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:30.436 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:30.436 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:30.436 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=227369 00:13:30.436 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 227369 00:13:30.436 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 227369 ']' 00:13:30.436 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.436 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.436 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.436 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:30.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.436 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.436 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:30.436 [2024-12-09 23:55:05.185734] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:13:30.436 [2024-12-09 23:55:05.185779] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.436 [2024-12-09 23:55:05.261839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.436 [2024-12-09 23:55:05.299433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.436 [2024-12-09 23:55:05.299467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.436 [2024-12-09 23:55:05.299474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.436 [2024-12-09 23:55:05.299480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:30.436 [2024-12-09 23:55:05.299487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.436 [2024-12-09 23:55:05.300008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.695 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.695 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:13:30.695 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:30.695 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:30.695 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:30.695 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.695 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:30.695 [2024-12-09 23:55:05.617324] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:30.695 [2024-12-09 23:55:05.617425] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:30.695 [2024-12-09 23:55:05.617451] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:30.955 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:30.955 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ceed18cf-347a-4c88-a26f-cf42e9f23116 00:13:30.955 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ceed18cf-347a-4c88-a26f-cf42e9f23116 00:13:30.955 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.955 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:13:30.955 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.955 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.955 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:30.955 23:55:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b ceed18cf-347a-4c88-a26f-cf42e9f23116 -t 2000 00:13:31.213 [ 00:13:31.213 { 00:13:31.213 "name": "ceed18cf-347a-4c88-a26f-cf42e9f23116", 00:13:31.213 "aliases": [ 00:13:31.213 "lvs/lvol" 00:13:31.213 ], 00:13:31.213 "product_name": "Logical Volume", 00:13:31.213 "block_size": 4096, 00:13:31.213 "num_blocks": 38912, 00:13:31.213 "uuid": "ceed18cf-347a-4c88-a26f-cf42e9f23116", 00:13:31.213 "assigned_rate_limits": { 00:13:31.213 "rw_ios_per_sec": 0, 00:13:31.213 "rw_mbytes_per_sec": 0, 
00:13:31.213 "r_mbytes_per_sec": 0, 00:13:31.213 "w_mbytes_per_sec": 0 00:13:31.213 }, 00:13:31.213 "claimed": false, 00:13:31.213 "zoned": false, 00:13:31.213 "supported_io_types": { 00:13:31.213 "read": true, 00:13:31.213 "write": true, 00:13:31.213 "unmap": true, 00:13:31.213 "flush": false, 00:13:31.213 "reset": true, 00:13:31.213 "nvme_admin": false, 00:13:31.213 "nvme_io": false, 00:13:31.213 "nvme_io_md": false, 00:13:31.213 "write_zeroes": true, 00:13:31.213 "zcopy": false, 00:13:31.213 "get_zone_info": false, 00:13:31.213 "zone_management": false, 00:13:31.213 "zone_append": false, 00:13:31.213 "compare": false, 00:13:31.213 "compare_and_write": false, 00:13:31.213 "abort": false, 00:13:31.213 "seek_hole": true, 00:13:31.213 "seek_data": true, 00:13:31.213 "copy": false, 00:13:31.213 "nvme_iov_md": false 00:13:31.213 }, 00:13:31.213 "driver_specific": { 00:13:31.213 "lvol": { 00:13:31.213 "lvol_store_uuid": "3fe34830-0e23-4341-b36d-2b45fe87ee4c", 00:13:31.213 "base_bdev": "aio_bdev", 00:13:31.213 "thin_provision": false, 00:13:31.213 "num_allocated_clusters": 38, 00:13:31.213 "snapshot": false, 00:13:31.213 "clone": false, 00:13:31.213 "esnap_clone": false 00:13:31.213 } 00:13:31.213 } 00:13:31.213 } 00:13:31.213 ] 00:13:31.213 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:13:31.213 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fe34830-0e23-4341-b36d-2b45fe87ee4c 00:13:31.213 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:31.471 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:31.471 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fe34830-0e23-4341-b36d-2b45fe87ee4c 00:13:31.471 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:31.471 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:31.471 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:31.730 [2024-12-09 23:55:06.578178] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:31.730 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fe34830-0e23-4341-b36d-2b45fe87ee4c 00:13:31.730 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:13:31.730 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fe34830-0e23-4341-b36d-2b45fe87ee4c 00:13:31.730 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:31.730 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.730 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:31.730 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.730 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:31.730 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.730 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:31.730 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:13:31.730 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fe34830-0e23-4341-b36d-2b45fe87ee4c 00:13:32.005 request: 00:13:32.005 { 00:13:32.005 "uuid": "3fe34830-0e23-4341-b36d-2b45fe87ee4c", 00:13:32.005 "method": "bdev_lvol_get_lvstores", 00:13:32.005 "req_id": 1 00:13:32.005 } 00:13:32.005 Got JSON-RPC error response 00:13:32.005 response: 00:13:32.005 { 00:13:32.005 "code": -19, 00:13:32.005 "message": "No such device" 00:13:32.005 } 00:13:32.005 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:13:32.005 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:32.005 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:32.005 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:32.005 23:55:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:32.264 aio_bdev 00:13:32.264 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ceed18cf-347a-4c88-a26f-cf42e9f23116 00:13:32.264 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ceed18cf-347a-4c88-a26f-cf42e9f23116 00:13:32.264 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:32.264 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:13:32.264 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:32.264 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:32.264 23:55:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:32.264 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b ceed18cf-347a-4c88-a26f-cf42e9f23116 -t 2000 00:13:32.523 [ 00:13:32.523 { 00:13:32.523 "name": "ceed18cf-347a-4c88-a26f-cf42e9f23116", 00:13:32.523 "aliases": [ 00:13:32.523 "lvs/lvol" 00:13:32.523 ], 00:13:32.523 "product_name": "Logical Volume", 00:13:32.523 "block_size": 4096, 00:13:32.523 "num_blocks": 38912, 00:13:32.523 "uuid": "ceed18cf-347a-4c88-a26f-cf42e9f23116", 00:13:32.523 "assigned_rate_limits": { 00:13:32.523 "rw_ios_per_sec": 0, 00:13:32.523 "rw_mbytes_per_sec": 0, 00:13:32.523 "r_mbytes_per_sec": 0, 00:13:32.523 "w_mbytes_per_sec": 0 00:13:32.523 }, 00:13:32.523 "claimed": false, 00:13:32.523 "zoned": false, 00:13:32.523 "supported_io_types": { 00:13:32.523 "read": true, 00:13:32.523 "write": true, 00:13:32.523 "unmap": true, 00:13:32.523 "flush": false, 00:13:32.523 "reset": true, 00:13:32.523 "nvme_admin": false, 00:13:32.523 "nvme_io": false, 00:13:32.523 "nvme_io_md": false, 00:13:32.523 "write_zeroes": true, 00:13:32.523 "zcopy": false, 00:13:32.523 "get_zone_info": false, 00:13:32.523 "zone_management": false, 00:13:32.523 "zone_append": false, 00:13:32.523 "compare": false, 00:13:32.523 "compare_and_write": false, 00:13:32.523 "abort": false, 00:13:32.523 "seek_hole": true, 00:13:32.523 "seek_data": true, 00:13:32.523 "copy": false, 00:13:32.524 "nvme_iov_md": false 00:13:32.524 }, 00:13:32.524 "driver_specific": { 00:13:32.524 "lvol": { 00:13:32.524 "lvol_store_uuid": "3fe34830-0e23-4341-b36d-2b45fe87ee4c", 00:13:32.524 "base_bdev": "aio_bdev", 00:13:32.524 "thin_provision": false, 00:13:32.524 "num_allocated_clusters": 38, 00:13:32.524 "snapshot": false, 00:13:32.524 "clone": false, 00:13:32.524 "esnap_clone": false 00:13:32.524 } 00:13:32.524 } 00:13:32.524 } 00:13:32.524 ] 00:13:32.524 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:13:32.524 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fe34830-0e23-4341-b36d-2b45fe87ee4c 00:13:32.524 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:32.783 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:32.783 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fe34830-0e23-4341-b36d-2b45fe87ee4c 00:13:32.783 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:33.078 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:33.078 23:55:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete ceed18cf-347a-4c88-a26f-cf42e9f23116 00:13:33.078 23:55:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3fe34830-0e23-4341-b36d-2b45fe87ee4c 00:13:33.337 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:13:33.596 00:13:33.596 real 0m16.978s 00:13:33.596 user 0m44.058s 00:13:33.596 sys 0m3.756s 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:33.596 ************************************ 00:13:33.596 END TEST lvs_grow_dirty 00:13:33.596 ************************************ 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:33.596 nvmf_trace.0 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:33.596 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:33.596 rmmod nvme_tcp 00:13:33.596 rmmod nvme_fabrics 00:13:33.596 rmmod nvme_keyring 00:13:33.856 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:33.856 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:13:33.856 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 
00:13:33.856 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 227369 ']' 00:13:33.856 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 227369 00:13:33.856 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 227369 ']' 00:13:33.856 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 227369 00:13:33.856 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:13:33.856 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.856 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227369 00:13:33.856 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.856 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.856 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227369' 00:13:33.856 killing process with pid 227369 00:13:33.856 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 227369 00:13:33.857 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 227369 00:13:33.857 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:33.857 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:33.857 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:33.857 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:13:33.857 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:13:33.857 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:13:33.857 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:33.857 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:33.857 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:33.857 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.857 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.857 23:55:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.396 23:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:36.396 00:13:36.396 real 0m42.103s 00:13:36.396 user 1m5.177s 00:13:36.396 sys 0m10.161s 00:13:36.396 23:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.396 23:55:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:36.396 ************************************ 00:13:36.396 END TEST nvmf_lvs_grow 00:13:36.396 ************************************ 00:13:36.396 23:55:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:36.396 23:55:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:36.396 23:55:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.396 23:55:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:36.396 ************************************ 00:13:36.396 START TEST nvmf_bdev_io_wait 00:13:36.396 ************************************ 00:13:36.396 23:55:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:36.396 * Looking for test storage... 00:13:36.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:13:36.396 23:55:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:36.396 23:55:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:13:36.396 23:55:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:36.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.396 --rc genhtml_branch_coverage=1 00:13:36.396 --rc genhtml_function_coverage=1 00:13:36.396 --rc genhtml_legend=1 00:13:36.396 --rc geninfo_all_blocks=1 00:13:36.396 --rc geninfo_unexecuted_blocks=1 00:13:36.396 00:13:36.396 ' 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:36.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.396 --rc genhtml_branch_coverage=1 00:13:36.396 --rc genhtml_function_coverage=1 00:13:36.396 --rc genhtml_legend=1 00:13:36.396 --rc geninfo_all_blocks=1 00:13:36.396 --rc geninfo_unexecuted_blocks=1 00:13:36.396 00:13:36.396 ' 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:36.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.396 --rc genhtml_branch_coverage=1 00:13:36.396 --rc genhtml_function_coverage=1 00:13:36.396 --rc genhtml_legend=1 00:13:36.396 --rc geninfo_all_blocks=1 00:13:36.396 --rc geninfo_unexecuted_blocks=1 00:13:36.396 00:13:36.396 ' 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:36.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:36.396 --rc genhtml_branch_coverage=1 00:13:36.396 --rc genhtml_function_coverage=1 00:13:36.396 --rc genhtml_legend=1 00:13:36.396 --rc geninfo_all_blocks=1 00:13:36.396 --rc geninfo_unexecuted_blocks=1 00:13:36.396 00:13:36.396 ' 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:13:36.396 23:55:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.396 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:36.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:13:36.397 23:55:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.981 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:42.982 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:42.982 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.982 23:55:16 
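The trace above is nvmf/common.sh's gather_supported_nvmf_pci_devs at work: it keys off PCI vendor/device IDs (here two Intel E810 ports, 0x8086:0x159b) and then resolves the kernel net devices sitting under each PCI function, which is where the cvl_0_0 / cvl_0_1 names come from. A rough standalone equivalent of that lookup, scanning sysfs directly instead of the script's pci_bus_cache arrays, might be:

# Sketch: list Intel E810 (0x8086:0x159b) PCI functions and their net interfaces.
intel=0x8086
e810=0x159b
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == "$e810" ]] || continue
    echo "Found ${pci##*/} ($intel - $e810)"
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done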
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:42.982 Found net devices under 0000:86:00.0: cvl_0_0 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:42.982 Found net devices under 0000:86:00.1: cvl_0_1 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:42.982 23:55:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:42.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:13:42.982 00:13:42.982 --- 10.0.0.2 ping statistics --- 00:13:42.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.982 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:42.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:13:42.982 00:13:42.982 --- 10.0.0.1 ping statistics --- 00:13:42.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.982 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=231438 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 231438 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 231438 ']' 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.982 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.983 [2024-12-09 23:55:17.223256] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
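Everything from nvmf_tcp_init through the nvmf_tgt launch above builds the test topology: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as the 10.0.0.2 target side, the second port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator side, port 4420 is opened in iptables, both directions are ping-checked, and the target application is started inside the namespace. Condensed into plain commands (interface and namespace names are the ones from this run; other machines will differ):

# Target-side port lives in its own namespace; initiator-side port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port; the script tags the rule with an SPDK_NVMF comment so cleanup can strip it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF: allow 4420'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Run the SPDK target inside the namespace so it can listen on 10.0.0.2 (path relative to the spdk checkout).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &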
00:13:42.983 [2024-12-09 23:55:17.223297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.983 [2024-12-09 23:55:17.303627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:42.983 [2024-12-09 23:55:17.344864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.983 [2024-12-09 23:55:17.344900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.983 [2024-12-09 23:55:17.344907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:42.983 [2024-12-09 23:55:17.344913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:42.983 [2024-12-09 23:55:17.344917] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.983 [2024-12-09 23:55:17.346488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.983 [2024-12-09 23:55:17.346601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.983 [2024-12-09 23:55:17.346709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.983 [2024-12-09 23:55:17.346710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:13:42.983 [2024-12-09 23:55:17.487096] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.983 Malloc0 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:42.983 [2024-12-09 23:55:17.530641] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=231616 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=231619 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:42.983 { 00:13:42.983 "params": { 
00:13:42.983 "name": "Nvme$subsystem", 00:13:42.983 "trtype": "$TEST_TRANSPORT", 00:13:42.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.983 "adrfam": "ipv4", 00:13:42.983 "trsvcid": "$NVMF_PORT", 00:13:42.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.983 "hdgst": ${hdgst:-false}, 00:13:42.983 "ddgst": ${ddgst:-false} 00:13:42.983 }, 00:13:42.983 "method": "bdev_nvme_attach_controller" 00:13:42.983 } 00:13:42.983 EOF 00:13:42.983 )") 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=231622 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:42.983 { 00:13:42.983 "params": { 00:13:42.983 "name": "Nvme$subsystem", 00:13:42.983 "trtype": "$TEST_TRANSPORT", 00:13:42.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.983 "adrfam": "ipv4", 00:13:42.983 "trsvcid": "$NVMF_PORT", 00:13:42.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.983 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.983 "hdgst": ${hdgst:-false}, 00:13:42.983 "ddgst": ${ddgst:-false} 00:13:42.983 }, 00:13:42.983 "method": "bdev_nvme_attach_controller" 00:13:42.983 } 00:13:42.983 EOF 00:13:42.983 )") 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=231626 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:42.983 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:42.983 { 00:13:42.983 "params": { 
00:13:42.983 "name": "Nvme$subsystem", 00:13:42.983 "trtype": "$TEST_TRANSPORT", 00:13:42.983 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.983 "adrfam": "ipv4", 00:13:42.983 "trsvcid": "$NVMF_PORT", 00:13:42.983 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.984 "hdgst": ${hdgst:-false}, 00:13:42.984 "ddgst": ${ddgst:-false} 00:13:42.984 }, 00:13:42.984 "method": "bdev_nvme_attach_controller" 00:13:42.984 } 00:13:42.984 EOF 00:13:42.984 )") 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:42.984 { 00:13:42.984 "params": { 00:13:42.984 "name": "Nvme$subsystem", 00:13:42.984 "trtype": "$TEST_TRANSPORT", 00:13:42.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.984 "adrfam": "ipv4", 00:13:42.984 "trsvcid": "$NVMF_PORT", 00:13:42.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.984 "hdgst": ${hdgst:-false}, 00:13:42.984 "ddgst": ${ddgst:-false} 00:13:42.984 }, 00:13:42.984 "method": "bdev_nvme_attach_controller" 00:13:42.984 } 00:13:42.984 EOF 00:13:42.984 )") 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 231616 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:42.984 "params": { 00:13:42.984 "name": "Nvme1", 00:13:42.984 "trtype": "tcp", 00:13:42.984 "traddr": "10.0.0.2", 00:13:42.984 "adrfam": "ipv4", 00:13:42.984 "trsvcid": "4420", 00:13:42.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.984 "hdgst": false, 00:13:42.984 "ddgst": false 00:13:42.984 }, 00:13:42.984 "method": "bdev_nvme_attach_controller" 00:13:42.984 }' 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:42.984 "params": { 00:13:42.984 "name": "Nvme1", 00:13:42.984 "trtype": "tcp", 00:13:42.984 "traddr": "10.0.0.2", 00:13:42.984 "adrfam": "ipv4", 00:13:42.984 "trsvcid": "4420", 00:13:42.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.984 "hdgst": false, 00:13:42.984 "ddgst": false 00:13:42.984 }, 00:13:42.984 "method": "bdev_nvme_attach_controller" 00:13:42.984 }' 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:42.984 "params": { 00:13:42.984 "name": "Nvme1", 00:13:42.984 "trtype": "tcp", 00:13:42.984 "traddr": "10.0.0.2", 00:13:42.984 "adrfam": "ipv4", 00:13:42.984 "trsvcid": "4420", 00:13:42.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.984 "hdgst": false, 00:13:42.984 "ddgst": false 00:13:42.984 }, 00:13:42.984 "method": "bdev_nvme_attach_controller" 00:13:42.984 }' 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:42.984 23:55:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:42.984 "params": { 00:13:42.984 "name": "Nvme1", 00:13:42.984 "trtype": "tcp", 00:13:42.984 "traddr": "10.0.0.2", 00:13:42.984 "adrfam": "ipv4", 00:13:42.984 "trsvcid": "4420", 00:13:42.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:42.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.984 "hdgst": false, 00:13:42.984 "ddgst": false 00:13:42.984 }, 00:13:42.984 "method": "bdev_nvme_attach_controller" 00:13:42.984 }' 00:13:42.984 [2024-12-09 23:55:17.581583] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:13:42.984 [2024-12-09 23:55:17.581636] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:42.984 [2024-12-09 23:55:17.585407] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:13:42.984 [2024-12-09 23:55:17.585449] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:42.984 [2024-12-09 23:55:17.587680] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:13:42.984 [2024-12-09 23:55:17.587680] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
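All four bdevperf launches above share one pattern: gen_nvmf_target_json prints a small config on a pipe and bdevperf reads it through --json /dev/fd/63, so each instance attaches the same Nvme1 controller to the target before running its own workload (write, read, flush, unmap on separate core masks, one DPDK file-prefix each). A hand-written equivalent for the write instance is sketched below; the trace only shows the bdev_nvme_attach_controller entry, so the surrounding "subsystems"/"bdev" envelope is the standard SPDK JSON-config wrapper assumed here:

# Write out the JSON the trace resolves for Nvme1 (envelope assumed, params copied from the trace).
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# The -w write / core-mask 0x10 instance; the other three differ only in -m, -i and -w.
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
wait "$WRITE_PID"    # the script waits on each PID in turn, as the wait 2316xx lines show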
00:13:42.984 [2024-12-09 23:55:17.587725] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-09 23:55:17.587725] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:42.984 --proc-type=auto ] 00:13:42.984 [2024-12-09 23:55:17.767051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.984 [2024-12-09 23:55:17.808699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:42.984 [2024-12-09 23:55:17.860775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.984 [2024-12-09 23:55:17.902444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:43.244 [2024-12-09 23:55:17.954201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.244 [2024-12-09 23:55:18.012087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:13:43.244 [2024-12-09 23:55:18.014845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.244 [2024-12-09 23:55:18.056612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:43.244 Running I/O for 1 seconds... 00:13:43.244 Running I/O for 1 seconds... 00:13:43.503 Running I/O for 1 seconds... 00:13:43.503 Running I/O for 1 seconds... 00:13:44.443 13596.00 IOPS, 53.11 MiB/s 00:13:44.443 Latency(us) 00:13:44.443 [2024-12-09T22:55:19.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.443 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:44.443 Nvme1n1 : 1.01 13656.00 53.34 0.00 0.00 9344.52 4843.97 16640.45 00:13:44.443 [2024-12-09T22:55:19.379Z] =================================================================================================================== 00:13:44.443 [2024-12-09T22:55:19.379Z] Total : 13656.00 53.34 0.00 0.00 9344.52 4843.97 16640.45 00:13:44.443 6328.00 IOPS, 24.72 MiB/s 00:13:44.443 Latency(us) 00:13:44.443 [2024-12-09T22:55:19.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.443 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:44.443 Nvme1n1 : 1.02 6350.37 24.81 0.00 0.00 19966.03 8434.20 29633.67 00:13:44.443 [2024-12-09T22:55:19.379Z] =================================================================================================================== 00:13:44.443 [2024-12-09T22:55:19.379Z] Total : 6350.37 24.81 0.00 0.00 19966.03 8434.20 29633.67 00:13:44.443 236672.00 IOPS, 924.50 MiB/s 00:13:44.443 Latency(us) 00:13:44.443 [2024-12-09T22:55:19.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.443 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:44.443 Nvme1n1 : 1.00 236307.11 923.07 0.00 0.00 538.71 231.51 1531.55 00:13:44.443 [2024-12-09T22:55:19.379Z] =================================================================================================================== 00:13:44.443 [2024-12-09T22:55:19.379Z] Total : 236307.11 923.07 0.00 0.00 538.71 231.51 1531.55 00:13:44.443 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 231619 00:13:44.443 6574.00 IOPS, 25.68 MiB/s 00:13:44.443 Latency(us) 
00:13:44.443 [2024-12-09T22:55:19.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.443 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:44.443 Nvme1n1 : 1.01 6666.10 26.04 0.00 0.00 19139.28 4900.95 45818.21 00:13:44.443 [2024-12-09T22:55:19.379Z] =================================================================================================================== 00:13:44.443 [2024-12-09T22:55:19.379Z] Total : 6666.10 26.04 0.00 0.00 19139.28 4900.95 45818.21 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 231622 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 231626 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:44.704 rmmod nvme_tcp 00:13:44.704 rmmod nvme_fabrics 00:13:44.704 rmmod nvme_keyring 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 231438 ']' 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 231438 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 231438 ']' 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 231438 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 231438 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.704 23:55:19 
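The teardown running through here mirrors the setup: the subsystem is deleted over RPC, the kernel NVMe/TCP modules are unloaded (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), the nvmf_tgt process is killed, the SPDK-tagged iptables rule is stripped, and the namespace and leftover addresses are removed a few lines below. As plain commands (the PID and names are this run's; the namespace removal step is an assumption, since _remove_spdk_ns is traced with xtrace disabled):

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

sync
modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics / nvme_keyring as unused dependents
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"           # nvmfpid was 231438 in this run

# Keep every iptables rule except the ones tagged SPDK_NVMF, then drop the namespace and addresses.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk              # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1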
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 231438' 00:13:44.704 killing process with pid 231438 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 231438 00:13:44.704 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 231438 00:13:44.965 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:44.965 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:44.965 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:44.965 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:13:44.965 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:13:44.965 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:44.965 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:13:44.965 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:44.965 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:44.965 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.965 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.965 23:55:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.876 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:46.876 00:13:46.876 real 0m10.849s 00:13:46.876 user 0m16.124s 00:13:46.876 sys 0m6.123s 00:13:46.876 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.876 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.876 ************************************ 00:13:46.876 END TEST nvmf_bdev_io_wait 00:13:46.876 ************************************ 00:13:46.876 23:55:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:46.876 23:55:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:46.876 23:55:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.876 23:55:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:47.137 ************************************ 00:13:47.137 START TEST nvmf_queue_depth 00:13:47.137 ************************************ 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:47.137 * Looking for test storage... 
00:13:47.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:13:47.137 23:55:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:13:47.137 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:47.137 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:13:47.137 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:13:47.137 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:47.137 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:47.137 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:13:47.137 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:47.137 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:47.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.137 --rc genhtml_branch_coverage=1 00:13:47.137 --rc genhtml_function_coverage=1 00:13:47.137 --rc genhtml_legend=1 00:13:47.137 --rc geninfo_all_blocks=1 00:13:47.137 --rc geninfo_unexecuted_blocks=1 00:13:47.137 00:13:47.137 ' 00:13:47.137 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:47.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.137 --rc genhtml_branch_coverage=1 00:13:47.137 --rc genhtml_function_coverage=1 00:13:47.137 --rc genhtml_legend=1 00:13:47.137 --rc geninfo_all_blocks=1 00:13:47.137 --rc geninfo_unexecuted_blocks=1 00:13:47.137 00:13:47.137 ' 00:13:47.137 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:47.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.137 --rc genhtml_branch_coverage=1 00:13:47.137 --rc genhtml_function_coverage=1 00:13:47.137 --rc genhtml_legend=1 00:13:47.137 --rc geninfo_all_blocks=1 00:13:47.137 --rc geninfo_unexecuted_blocks=1 00:13:47.137 00:13:47.137 ' 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:47.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:47.138 --rc genhtml_branch_coverage=1 00:13:47.138 --rc genhtml_function_coverage=1 00:13:47.138 --rc genhtml_legend=1 00:13:47.138 --rc geninfo_all_blocks=1 00:13:47.138 --rc geninfo_unexecuted_blocks=1 00:13:47.138 00:13:47.138 ' 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:13:47.138 23:55:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:47.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:13:47.138 23:55:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:53.718 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:53.718 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:53.718 Found net devices under 0000:86:00.0: cvl_0_0 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:53.718 Found net devices under 0000:86:00.1: cvl_0_1 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:53.718 23:55:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:53.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:13:53.718 00:13:53.718 --- 10.0.0.2 ping statistics --- 00:13:53.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.719 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:53.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:13:53.719 00:13:53.719 --- 10.0.0.1 ping statistics --- 00:13:53.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.719 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=235469 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 235469 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 235469 ']' 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.719 [2024-12-09 23:55:28.100895] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:13:53.719 [2024-12-09 23:55:28.100939] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.719 [2024-12-09 23:55:28.182584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.719 [2024-12-09 23:55:28.220776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.719 [2024-12-09 23:55:28.220810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.719 [2024-12-09 23:55:28.220817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.719 [2024-12-09 23:55:28.220823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.719 [2024-12-09 23:55:28.220828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.719 [2024-12-09 23:55:28.221371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.719 [2024-12-09 23:55:28.369496] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.719 Malloc0 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.719 23:55:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.719 [2024-12-09 23:55:28.419914] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=235497 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 235497 /var/tmp/bdevperf.sock 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 235497 ']' 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:53.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.719 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.719 [2024-12-09 23:55:28.472901] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:13:53.719 [2024-12-09 23:55:28.472941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid235497 ] 00:13:53.719 [2024-12-09 23:55:28.549011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.719 [2024-12-09 23:55:28.589200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.978 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.978 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:53.978 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:53.978 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.978 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:53.978 NVMe0n1 00:13:53.978 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.978 23:55:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:54.236 Running I/O for 10 seconds... 00:13:56.107 11348.00 IOPS, 44.33 MiB/s [2024-12-09T22:55:32.418Z] 11768.50 IOPS, 45.97 MiB/s [2024-12-09T22:55:32.985Z] 11940.67 IOPS, 46.64 MiB/s [2024-12-09T22:55:34.360Z] 12016.00 IOPS, 46.94 MiB/s [2024-12-09T22:55:35.297Z] 12043.20 IOPS, 47.04 MiB/s [2024-12-09T22:55:36.233Z] 12101.33 IOPS, 47.27 MiB/s [2024-12-09T22:55:37.167Z] 12128.86 IOPS, 47.38 MiB/s [2024-12-09T22:55:38.102Z] 12148.50 IOPS, 47.46 MiB/s [2024-12-09T22:55:39.038Z] 12183.67 IOPS, 47.59 MiB/s [2024-12-09T22:55:39.296Z] 12222.70 IOPS, 47.74 MiB/s 00:14:04.360 Latency(us) 00:14:04.360 [2024-12-09T22:55:39.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.360 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:04.360 Verification LBA range: start 0x0 length 0x4000 00:14:04.360 NVMe0n1 : 10.06 12252.56 47.86 0.00 0.00 83261.87 19831.76 52656.75 00:14:04.360 [2024-12-09T22:55:39.296Z] =================================================================================================================== 00:14:04.360 [2024-12-09T22:55:39.296Z] Total : 12252.56 47.86 0.00 0.00 83261.87 19831.76 52656.75 00:14:04.360 { 00:14:04.360 "results": [ 00:14:04.360 { 00:14:04.360 "job": "NVMe0n1", 00:14:04.360 "core_mask": "0x1", 00:14:04.360 "workload": "verify", 00:14:04.360 "status": "finished", 00:14:04.360 "verify_range": { 00:14:04.360 "start": 0, 00:14:04.360 "length": 16384 00:14:04.360 }, 00:14:04.360 "queue_depth": 1024, 00:14:04.360 "io_size": 4096, 00:14:04.360 "runtime": 10.0592, 00:14:04.360 "iops": 12252.564816287577, 00:14:04.360 "mibps": 47.86158131362335, 00:14:04.360 "io_failed": 0, 00:14:04.360 "io_timeout": 0, 00:14:04.360 "avg_latency_us": 83261.87486459057, 00:14:04.360 "min_latency_us": 19831.76347826087, 00:14:04.360 "max_latency_us": 52656.751304347825 00:14:04.360 } 00:14:04.360 ], 00:14:04.360 "core_count": 1 00:14:04.360 } 00:14:04.360 23:55:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 235497 00:14:04.360 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 235497 ']' 00:14:04.360 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 235497 00:14:04.360 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:14:04.360 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.360 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235497 00:14:04.360 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:04.360 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:04.360 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235497' 00:14:04.360 killing process with pid 235497 00:14:04.360 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 235497 00:14:04.360 Received shutdown signal, test time was about 10.000000 seconds 00:14:04.360 00:14:04.360 Latency(us) 00:14:04.360 [2024-12-09T22:55:39.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.360 [2024-12-09T22:55:39.296Z] =================================================================================================================== 00:14:04.360 [2024-12-09T22:55:39.296Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:04.360 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 235497 00:14:04.360 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:04.360 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:04.360 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:04.360 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:14:04.619 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:04.619 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:14:04.619 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:04.619 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:04.619 rmmod nvme_tcp 00:14:04.619 rmmod nvme_fabrics 00:14:04.619 rmmod nvme_keyring 00:14:04.619 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:04.619 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:14:04.619 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:14:04.619 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 235469 ']' 00:14:04.619 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 235469 00:14:04.619 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 235469 ']' 00:14:04.619 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 235469 00:14:04.620 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:14:04.620 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.620 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235469 00:14:04.620 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:04.620 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:04.620 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235469' 00:14:04.620 killing process with pid 235469 00:14:04.620 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 235469 00:14:04.620 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 235469 00:14:04.879 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:04.879 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:04.879 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:04.879 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:14:04.879 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:14:04.879 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:04.879 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:14:04.879 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:04.879 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:04.879 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.879 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.879 23:55:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.787 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:06.787 00:14:06.787 real 0m19.841s 00:14:06.787 user 0m23.272s 00:14:06.787 sys 0m6.019s 00:14:06.787 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.787 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:06.787 ************************************ 00:14:06.787 END TEST nvmf_queue_depth 00:14:06.787 ************************************ 00:14:06.787 23:55:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:06.787 23:55:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:06.787 23:55:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.787 23:55:41 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.048 ************************************ 00:14:07.048 START TEST nvmf_target_multipath 00:14:07.048 ************************************ 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:07.048 * Looking for test storage... 00:14:07.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:07.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.048 --rc genhtml_branch_coverage=1 00:14:07.048 --rc genhtml_function_coverage=1 00:14:07.048 --rc genhtml_legend=1 00:14:07.048 --rc geninfo_all_blocks=1 00:14:07.048 --rc geninfo_unexecuted_blocks=1 00:14:07.048 00:14:07.048 ' 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:07.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.048 --rc genhtml_branch_coverage=1 00:14:07.048 --rc genhtml_function_coverage=1 00:14:07.048 --rc genhtml_legend=1 00:14:07.048 --rc geninfo_all_blocks=1 00:14:07.048 --rc geninfo_unexecuted_blocks=1 00:14:07.048 00:14:07.048 ' 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:07.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.048 --rc genhtml_branch_coverage=1 00:14:07.048 --rc genhtml_function_coverage=1 00:14:07.048 --rc genhtml_legend=1 00:14:07.048 --rc geninfo_all_blocks=1 00:14:07.048 --rc geninfo_unexecuted_blocks=1 00:14:07.048 00:14:07.048 ' 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:07.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.048 --rc genhtml_branch_coverage=1 00:14:07.048 --rc genhtml_function_coverage=1 00:14:07.048 --rc genhtml_legend=1 00:14:07.048 --rc geninfo_all_blocks=1 00:14:07.048 --rc geninfo_unexecuted_blocks=1 00:14:07.048 00:14:07.048 ' 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.048 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:07.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:14:07.049 23:55:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:13.625 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:13.626 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:13.626 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:13.626 Found net devices under 0000:86:00.0: cvl_0_0 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.626 23:55:47 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:13.626 Found net devices under 0000:86:00.1: cvl_0_1 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:13.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:14:13.626 00:14:13.626 --- 10.0.0.2 ping statistics --- 00:14:13.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.626 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:13.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:13.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:14:13.626 00:14:13.626 --- 10.0.0.1 ping statistics --- 00:14:13.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.626 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:13.626 only one NIC for nvmf test 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
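The nvmf_tcp_init trace above is the whole TCP test-bed setup for this run: the first E810 port (cvl_0_0) is moved into a private network namespace that plays the target, the second port (cvl_0_1) stays in the root namespace as the initiator, both sides get addresses on 10.0.0.0/24 (10.0.0.2 for the target, 10.0.0.1 for the initiator), an iptables rule tagged with an SPDK_NVMF comment opens TCP port 4420, and a ping in each direction confirms the link before any NVMe-oF traffic. A minimal stand-alone sketch of the same sequence follows; interface names and addresses are taken from this log and are hardware-specific, and the iptables comment text is shortened here (the harness stores the full rule text in it):

  TARGET_IF=cvl_0_0            # moved into the target namespace (names from this log)
  INITIATOR_IF=cvl_0_1         # stays in the root namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up

  # the SPDK_NVMF tag is what the cleanup path greps for later
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF: allow NVMe/TCP port 4420'

  ping -c 1 10.0.0.2                        # root namespace -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> initiator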
00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:13.626 rmmod nvme_tcp 00:14:13.626 rmmod nvme_fabrics 00:14:13.626 rmmod nvme_keyring 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.626 23:55:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.534 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:15.534 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:15.534 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:15.534 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:15.534 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:14:15.534 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:15.534 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:14:15.534 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:15.534 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:15.534 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:15.534 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:15.535 00:14:15.535 real 0m8.315s 00:14:15.535 user 0m1.817s 00:14:15.535 sys 0m4.521s 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:15.535 ************************************ 00:14:15.535 END TEST nvmf_target_multipath 00:14:15.535 ************************************ 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:15.535 ************************************ 00:14:15.535 START TEST nvmf_zcopy 00:14:15.535 ************************************ 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:15.535 * Looking for test storage... 
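The nvmftestfini / nvmf_tcp_fini trace above is the matching teardown: unload the initiator-side nvme-tcp and nvme-fabrics modules, reload the iptables ruleset minus every rule carrying the SPDK_NVMF comment, remove the target namespace, and flush the leftover initiator address. A short sketch of that path using the commands visible in the trace; _remove_spdk_ns itself runs with tracing suppressed here, so the ip netns delete line is an assumption about what it boils down to:

  # the harness retries up to 20 times under set +e; '|| true' stands in for that loop
  modprobe -v -r nvme-tcp || true
  modprobe -v -r nvme-fabrics || true

  # drop only the rules tagged SPDK_NVMF, leaving the rest of the firewall untouched
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # assumption: _remove_spdk_ns deletes the namespace, which returns cvl_0_0 to the root namespace
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1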
00:14:15.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:15.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.535 --rc genhtml_branch_coverage=1 00:14:15.535 --rc genhtml_function_coverage=1 00:14:15.535 --rc genhtml_legend=1 00:14:15.535 --rc geninfo_all_blocks=1 00:14:15.535 --rc geninfo_unexecuted_blocks=1 00:14:15.535 00:14:15.535 ' 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:15.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.535 --rc genhtml_branch_coverage=1 00:14:15.535 --rc genhtml_function_coverage=1 00:14:15.535 --rc genhtml_legend=1 00:14:15.535 --rc geninfo_all_blocks=1 00:14:15.535 --rc geninfo_unexecuted_blocks=1 00:14:15.535 00:14:15.535 ' 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:15.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.535 --rc genhtml_branch_coverage=1 00:14:15.535 --rc genhtml_function_coverage=1 00:14:15.535 --rc genhtml_legend=1 00:14:15.535 --rc geninfo_all_blocks=1 00:14:15.535 --rc geninfo_unexecuted_blocks=1 00:14:15.535 00:14:15.535 ' 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:15.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.535 --rc genhtml_branch_coverage=1 00:14:15.535 --rc genhtml_function_coverage=1 00:14:15.535 --rc genhtml_legend=1 00:14:15.535 --rc geninfo_all_blocks=1 00:14:15.535 --rc geninfo_unexecuted_blocks=1 00:14:15.535 00:14:15.535 ' 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.535 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:15.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:14:15.536 23:55:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:14:22.110 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:22.111 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:22.111 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:22.111 Found net devices under 0000:86:00.0: cvl_0_0 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:22.111 Found net devices under 0000:86:00.1: cvl_0_1 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:22.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:22.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.442 ms 00:14:22.111 00:14:22.111 --- 10.0.0.2 ping statistics --- 00:14:22.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.111 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:22.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:22.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:14:22.111 00:14:22.111 --- 10.0.0.1 ping statistics --- 00:14:22.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.111 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=244400 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 244400 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 244400 ']' 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.111 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.112 [2024-12-09 23:55:56.367242] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
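The zcopy test then starts its own target with nvmfappstart -m 0x2: build/bin/nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace, its PID (244400 in this run) is recorded as nvmfpid, and waitforlisten blocks until the process answers on /var/tmp/spdk.sock. A condensed sketch of that launch step; waitforlisten is a harness helper, so the polling loop below is an illustrative stand-in for it, not its actual code:

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
  RPC_SOCK=/var/tmp/spdk.sock

  ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  # poll the RPC socket until the target is ready (UNIX sockets are reachable across netns)
  until "$SPDK_ROOT/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done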
00:14:22.112 [2024-12-09 23:55:56.367293] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.112 [2024-12-09 23:55:56.447316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.112 [2024-12-09 23:55:56.487439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.112 [2024-12-09 23:55:56.487477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.112 [2024-12-09 23:55:56.487484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.112 [2024-12-09 23:55:56.487490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.112 [2024-12-09 23:55:56.487496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.112 [2024-12-09 23:55:56.488038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.112 [2024-12-09 23:55:56.624416] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.112 [2024-12-09 23:55:56.644609] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.112 malloc0 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:22.112 { 00:14:22.112 "params": { 00:14:22.112 "name": "Nvme$subsystem", 00:14:22.112 "trtype": "$TEST_TRANSPORT", 00:14:22.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:22.112 "adrfam": "ipv4", 00:14:22.112 "trsvcid": "$NVMF_PORT", 00:14:22.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:22.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:22.112 "hdgst": ${hdgst:-false}, 00:14:22.112 "ddgst": ${ddgst:-false} 00:14:22.112 }, 00:14:22.112 "method": "bdev_nvme_attach_controller" 00:14:22.112 } 00:14:22.112 EOF 00:14:22.112 )") 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
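With the target listening, the rpc_cmd calls traced above assemble the zcopy test configuration: a TCP transport created with the options shown (including --zcopy), subsystem cnode1 allowing up to 10 namespaces, TCP listeners on 10.0.0.2:4420 for the subsystem and for discovery, a 32 MB malloc bdev with 4096-byte blocks, and that bdev exported as namespace 1. Collected into one block, with rpc_py pointing at scripts/rpc.py as in the multipath trace earlier:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc_py nvmf_create_transport -t tcp -o -c 0 --zcopy
  $rpc_py nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc_py bdev_malloc_create 32 4096 -b malloc0
  $rpc_py nvmf_subsystem_add_ns "$nqn" malloc0 -n 1

After this point the bdevperf trace that follows connects to the subsystem over 10.0.0.2:4420 using the JSON emitted by gen_nvmf_target_json.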
00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:14:22.112 23:55:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:22.112 "params": { 00:14:22.112 "name": "Nvme1", 00:14:22.112 "trtype": "tcp", 00:14:22.112 "traddr": "10.0.0.2", 00:14:22.112 "adrfam": "ipv4", 00:14:22.112 "trsvcid": "4420", 00:14:22.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:22.112 "hdgst": false, 00:14:22.112 "ddgst": false 00:14:22.112 }, 00:14:22.112 "method": "bdev_nvme_attach_controller" 00:14:22.112 }' 00:14:22.112 [2024-12-09 23:55:56.727772] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:14:22.112 [2024-12-09 23:55:56.727812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid244453 ] 00:14:22.112 [2024-12-09 23:55:56.802320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.112 [2024-12-09 23:55:56.842608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.112 Running I/O for 10 seconds... 00:14:24.422 8540.00 IOPS, 66.72 MiB/s [2024-12-09T22:56:00.294Z] 8589.00 IOPS, 67.10 MiB/s [2024-12-09T22:56:01.230Z] 8577.33 IOPS, 67.01 MiB/s [2024-12-09T22:56:02.165Z] 8572.75 IOPS, 66.97 MiB/s [2024-12-09T22:56:03.099Z] 8589.40 IOPS, 67.10 MiB/s [2024-12-09T22:56:04.475Z] 8598.50 IOPS, 67.18 MiB/s [2024-12-09T22:56:05.409Z] 8602.29 IOPS, 67.21 MiB/s [2024-12-09T22:56:06.345Z] 8619.12 IOPS, 67.34 MiB/s [2024-12-09T22:56:07.281Z] 8626.33 IOPS, 67.39 MiB/s [2024-12-09T22:56:07.281Z] 8630.30 IOPS, 67.42 MiB/s 00:14:32.345 Latency(us) 00:14:32.345 [2024-12-09T22:56:07.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.345 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:32.345 Verification LBA range: start 0x0 length 0x1000 00:14:32.345 Nvme1n1 : 10.01 8633.29 67.45 0.00 0.00 14783.33 2407.74 23251.03 00:14:32.345 [2024-12-09T22:56:07.281Z] =================================================================================================================== 00:14:32.345 [2024-12-09T22:56:07.281Z] Total : 8633.29 67.45 0.00 0.00 14783.33 2407.74 23251.03 00:14:32.345 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=246256 00:14:32.345 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:32.345 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.345 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:32.345 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:32.345 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:14:32.345 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:14:32.345 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:32.345 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:32.345 { 00:14:32.345 "params": { 00:14:32.345 "name": 
"Nvme$subsystem", 00:14:32.345 "trtype": "$TEST_TRANSPORT", 00:14:32.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:32.345 "adrfam": "ipv4", 00:14:32.345 "trsvcid": "$NVMF_PORT", 00:14:32.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:32.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:32.345 "hdgst": ${hdgst:-false}, 00:14:32.345 "ddgst": ${ddgst:-false} 00:14:32.345 }, 00:14:32.345 "method": "bdev_nvme_attach_controller" 00:14:32.345 } 00:14:32.345 EOF 00:14:32.345 )") 00:14:32.345 [2024-12-09 23:56:07.244020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.345 [2024-12-09 23:56:07.244055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.345 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:14:32.345 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:14:32.345 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:14:32.345 23:56:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:32.345 "params": { 00:14:32.345 "name": "Nvme1", 00:14:32.345 "trtype": "tcp", 00:14:32.345 "traddr": "10.0.0.2", 00:14:32.345 "adrfam": "ipv4", 00:14:32.345 "trsvcid": "4420", 00:14:32.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:32.345 "hdgst": false, 00:14:32.345 "ddgst": false 00:14:32.345 }, 00:14:32.345 "method": "bdev_nvme_attach_controller" 00:14:32.345 }' 00:14:32.345 [2024-12-09 23:56:07.256015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.345 [2024-12-09 23:56:07.256029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.345 [2024-12-09 23:56:07.268044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.345 [2024-12-09 23:56:07.268053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.345 [2024-12-09 23:56:07.280105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.345 [2024-12-09 23:56:07.280126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.285636] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:14:32.604 [2024-12-09 23:56:07.285674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid246256 ] 00:14:32.604 [2024-12-09 23:56:07.292114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.604 [2024-12-09 23:56:07.292130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.304141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.604 [2024-12-09 23:56:07.304151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.316178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.604 [2024-12-09 23:56:07.316188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.328204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.604 [2024-12-09 23:56:07.328213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.340233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.604 [2024-12-09 23:56:07.340242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.352264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.604 [2024-12-09 23:56:07.352273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.362361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.604 [2024-12-09 23:56:07.364297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.604 [2024-12-09 23:56:07.364306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.376335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.604 [2024-12-09 23:56:07.376351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.388363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.604 [2024-12-09 23:56:07.388372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.400401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.604 [2024-12-09 23:56:07.400416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.402721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.604 [2024-12-09 23:56:07.412433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.604 [2024-12-09 23:56:07.412446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.424471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.604 [2024-12-09 23:56:07.424491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.436500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:14:32.604 [2024-12-09 23:56:07.436515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.448530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.604 [2024-12-09 23:56:07.448546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.604 [2024-12-09 23:56:07.460564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.604 [2024-12-09 23:56:07.460578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.605 [2024-12-09 23:56:07.472593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.605 [2024-12-09 23:56:07.472605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.605 [2024-12-09 23:56:07.484621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.605 [2024-12-09 23:56:07.484631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.605 [2024-12-09 23:56:07.496666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.605 [2024-12-09 23:56:07.496685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.605 [2024-12-09 23:56:07.508697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.605 [2024-12-09 23:56:07.508712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.605 [2024-12-09 23:56:07.520728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.605 [2024-12-09 23:56:07.520743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.605 [2024-12-09 23:56:07.532755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.605 [2024-12-09 23:56:07.532768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.863 [2024-12-09 23:56:07.544798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.863 [2024-12-09 23:56:07.544816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.863 [2024-12-09 23:56:07.556826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.863 [2024-12-09 23:56:07.556836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.863 [2024-12-09 23:56:07.568856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.863 [2024-12-09 23:56:07.568867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.863 [2024-12-09 23:56:07.580890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.863 [2024-12-09 23:56:07.580902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.863 [2024-12-09 23:56:07.592920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.863 [2024-12-09 23:56:07.592929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.863 [2024-12-09 23:56:07.604950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.863 [2024-12-09 23:56:07.604959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.863 [2024-12-09 
23:56:07.616993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.863 [2024-12-09 23:56:07.617006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.863 [2024-12-09 23:56:07.629021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.863 [2024-12-09 23:56:07.629030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.863 [2024-12-09 23:56:07.641054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.863 [2024-12-09 23:56:07.641063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.863 [2024-12-09 23:56:07.653087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.863 [2024-12-09 23:56:07.653096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.863 [2024-12-09 23:56:07.665119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.863 [2024-12-09 23:56:07.665130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.863 [2024-12-09 23:56:07.677177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.864 [2024-12-09 23:56:07.677194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.864 Running I/O for 5 seconds... 00:14:32.864 [2024-12-09 23:56:07.691285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.864 [2024-12-09 23:56:07.691305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.864 [2024-12-09 23:56:07.700512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.864 [2024-12-09 23:56:07.700531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.864 [2024-12-09 23:56:07.709221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.864 [2024-12-09 23:56:07.709244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.864 [2024-12-09 23:56:07.723706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.864 [2024-12-09 23:56:07.723725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.864 [2024-12-09 23:56:07.732829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.864 [2024-12-09 23:56:07.732847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.864 [2024-12-09 23:56:07.747279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.864 [2024-12-09 23:56:07.747298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.864 [2024-12-09 23:56:07.761471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.864 [2024-12-09 23:56:07.761490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.864 [2024-12-09 23:56:07.775637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.864 [2024-12-09 23:56:07.775657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.864 [2024-12-09 23:56:07.789905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
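From this point to the end of the excerpt the target side of the log is dominated by repeating pairs of errors: subsystem.c reports "Requested NSID 1 already in use" and nvmf_rpc.c follows with "Unable to add namespace", over and over with fresh timestamps while the bdevperf job keeps issuing I/O. Purely as an illustration of the RPC that exercises this error path (the test script's actual loop is not visible in this excerpt), a hypothetical sequence could look like the sketch below; the scripts/rpc.py path, the Malloc0 bdev name, and the -n/--nsid flag spelling are assumptions.

# Hypothetical sketch only: requesting an NSID that is already attached makes
# the target print the same subsystem.c / nvmf_rpc.c error pair seen above.
rpc=./scripts/rpc.py                    # assumed location of the SPDK RPC client
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_subsystem_add_ns -n 1 "$nqn" Malloc0   # first add claims NSID 1
$rpc nvmf_subsystem_add_ns -n 1 "$nqn" Malloc0 \
    || echo "second add rejected: NSID 1 already in use"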
00:14:32.864 [2024-12-09 23:56:07.789928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.122 [2024-12-09 23:56:07.804219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.122 [2024-12-09 23:56:07.804240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.122 [2024-12-09 23:56:07.815272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.122 [2024-12-09 23:56:07.815291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.122 [2024-12-09 23:56:07.824882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.122 [2024-12-09 23:56:07.824901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.122 [2024-12-09 23:56:07.839422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:07.839441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:07.853085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:07.853104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:07.862082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:07.862101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:07.871106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:07.871124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:07.880445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:07.880463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:07.889195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:07.889213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:07.903762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:07.903780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:07.917330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:07.917348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:07.931153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:07.931181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:07.945027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:07.945045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:07.958929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:07.958950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:07.972759] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:07.972783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:07.986605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:07.986624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:08.000555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:08.000574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:08.014214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:08.014234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:08.028039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:08.028058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:08.041553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:08.041572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.123 [2024-12-09 23:56:08.050693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.123 [2024-12-09 23:56:08.050712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.381 [2024-12-09 23:56:08.065674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.381 [2024-12-09 23:56:08.065695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.381 [2024-12-09 23:56:08.076718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.381 [2024-12-09 23:56:08.076737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.381 [2024-12-09 23:56:08.091253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.381 [2024-12-09 23:56:08.091272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.381 [2024-12-09 23:56:08.104989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.381 [2024-12-09 23:56:08.105008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.381 [2024-12-09 23:56:08.118639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.381 [2024-12-09 23:56:08.118658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.381 [2024-12-09 23:56:08.132664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.381 [2024-12-09 23:56:08.132682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.381 [2024-12-09 23:56:08.142096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.381 [2024-12-09 23:56:08.142115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.381 [2024-12-09 23:56:08.156670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.381 [2024-12-09 23:56:08.156689] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.381 [2024-12-09 23:56:08.170635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.381 [2024-12-09 23:56:08.170655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.381 [2024-12-09 23:56:08.184705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.381 [2024-12-09 23:56:08.184724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.381 [2024-12-09 23:56:08.193759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.381 [2024-12-09 23:56:08.193777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.381 [2024-12-09 23:56:08.203357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.381 [2024-12-09 23:56:08.203376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.381 [2024-12-09 23:56:08.212797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.382 [2024-12-09 23:56:08.212819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.382 [2024-12-09 23:56:08.227330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.382 [2024-12-09 23:56:08.227348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.382 [2024-12-09 23:56:08.241501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.382 [2024-12-09 23:56:08.241520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.382 [2024-12-09 23:56:08.252484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.382 [2024-12-09 23:56:08.252502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.382 [2024-12-09 23:56:08.262144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.382 [2024-12-09 23:56:08.262170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.382 [2024-12-09 23:56:08.276913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.382 [2024-12-09 23:56:08.276932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.382 [2024-12-09 23:56:08.287379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.382 [2024-12-09 23:56:08.287397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.382 [2024-12-09 23:56:08.301460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.382 [2024-12-09 23:56:08.301479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.382 [2024-12-09 23:56:08.315566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.382 [2024-12-09 23:56:08.315587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.640 [2024-12-09 23:56:08.329711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.640 [2024-12-09 23:56:08.329731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.640 [2024-12-09 23:56:08.343471] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.640 [2024-12-09 23:56:08.343490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.640 [2024-12-09 23:56:08.357126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.640 [2024-12-09 23:56:08.357150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.640 [2024-12-09 23:56:08.371223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.640 [2024-12-09 23:56:08.371241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.640 [2024-12-09 23:56:08.384697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.640 [2024-12-09 23:56:08.384715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.640 [2024-12-09 23:56:08.398737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.640 [2024-12-09 23:56:08.398755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.641 [2024-12-09 23:56:08.412742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.641 [2024-12-09 23:56:08.412760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.641 [2024-12-09 23:56:08.426527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.641 [2024-12-09 23:56:08.426545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.641 [2024-12-09 23:56:08.440838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.641 [2024-12-09 23:56:08.440856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.641 [2024-12-09 23:56:08.456297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.641 [2024-12-09 23:56:08.456316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.641 [2024-12-09 23:56:08.470544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.641 [2024-12-09 23:56:08.470569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.641 [2024-12-09 23:56:08.484451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.641 [2024-12-09 23:56:08.484469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.641 [2024-12-09 23:56:08.498503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.641 [2024-12-09 23:56:08.498521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.641 [2024-12-09 23:56:08.512397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.641 [2024-12-09 23:56:08.512415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.641 [2024-12-09 23:56:08.526250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.641 [2024-12-09 23:56:08.526269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.641 [2024-12-09 23:56:08.539790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.641 [2024-12-09 23:56:08.539808] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.641 [2024-12-09 23:56:08.553614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.641 [2024-12-09 23:56:08.553632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.641 [2024-12-09 23:56:08.567359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.641 [2024-12-09 23:56:08.567377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.899 [2024-12-09 23:56:08.581640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.899 [2024-12-09 23:56:08.581663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.899 [2024-12-09 23:56:08.592214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.899 [2024-12-09 23:56:08.592232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.899 [2024-12-09 23:56:08.606482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.606500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.615610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.615628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.630018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.630037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.644048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.644066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.653034] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.653051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.667612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.667631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.682281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.682299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 16659.00 IOPS, 130.15 MiB/s [2024-12-09T22:56:08.836Z] [2024-12-09 23:56:08.696881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.696900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.711082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.711100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.724875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.724893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 
23:56:08.734077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.734096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.748122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.748139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.761923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.761941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.775521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.775540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.789480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.789498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.803411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.803429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.817512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.817530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.900 [2024-12-09 23:56:08.831451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.900 [2024-12-09 23:56:08.831471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.158 [2024-12-09 23:56:08.845778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.158 [2024-12-09 23:56:08.845798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.158 [2024-12-09 23:56:08.859990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.158 [2024-12-09 23:56:08.860009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.158 [2024-12-09 23:56:08.873699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.158 [2024-12-09 23:56:08.873718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.158 [2024-12-09 23:56:08.887048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.158 [2024-12-09 23:56:08.887066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.158 [2024-12-09 23:56:08.900874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.158 [2024-12-09 23:56:08.900893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.158 [2024-12-09 23:56:08.914798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.158 [2024-12-09 23:56:08.914816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.158 [2024-12-09 23:56:08.928469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.158 [2024-12-09 23:56:08.928488] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.158 [2024-12-09 23:56:08.942180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.158 [2024-12-09 23:56:08.942198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.158 [2024-12-09 23:56:08.956264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.158 [2024-12-09 23:56:08.956283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.158 [2024-12-09 23:56:08.969907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.158 [2024-12-09 23:56:08.969925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.158 [2024-12-09 23:56:08.978981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.158 [2024-12-09 23:56:08.978998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.158 [2024-12-09 23:56:08.988855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.159 [2024-12-09 23:56:08.988873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.159 [2024-12-09 23:56:08.998336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.159 [2024-12-09 23:56:08.998353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.159 [2024-12-09 23:56:09.012359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.159 [2024-12-09 23:56:09.012377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.159 [2024-12-09 23:56:09.025881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.159 [2024-12-09 23:56:09.025900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.159 [2024-12-09 23:56:09.035364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.159 [2024-12-09 23:56:09.035382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.159 [2024-12-09 23:56:09.044866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.159 [2024-12-09 23:56:09.044884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.159 [2024-12-09 23:56:09.054219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.159 [2024-12-09 23:56:09.054238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.159 [2024-12-09 23:56:09.063531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.159 [2024-12-09 23:56:09.063549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.159 [2024-12-09 23:56:09.078125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.159 [2024-12-09 23:56:09.078144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.159 [2024-12-09 23:56:09.091856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.159 [2024-12-09 23:56:09.091875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.106092] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.106113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.114778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.114796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.129660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.129679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.140214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.140233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.154636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.154656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.168574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.168593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.178115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.178135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.192550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.192569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.205923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.205941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.219428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.219446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.228214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.228232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.237535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.237552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.247246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.247264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.261723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.261745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.275700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.275723] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.284718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.284737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.294187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.294206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.309094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.309112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.324498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.324517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.333687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.333705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.418 [2024-12-09 23:56:09.348732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.418 [2024-12-09 23:56:09.348753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.363424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.363445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.377550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.377569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.391780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.391800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.405304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.405324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.414076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.414095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.428427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.428446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.441897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.441916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.455451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.455469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.469492] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.469511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.483485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.483504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.497092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.497113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.510796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.510815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.524638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.524657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.533521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.533539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.542841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.542858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.552631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.552649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.561568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.561586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.575944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.575962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.589687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.589705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.677 [2024-12-09 23:56:09.603733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.677 [2024-12-09 23:56:09.603750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.617511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.617530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.631355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.631373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.645748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.645766] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.661041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.661060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.675432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.675454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.686425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.686443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 16726.50 IOPS, 130.68 MiB/s [2024-12-09T22:56:09.872Z] [2024-12-09 23:56:09.695948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.695966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.710572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.710591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.723721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.723740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.738107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.738126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.747051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.747069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.761449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.761467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.775212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.775230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.789156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.789180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.802798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.936 [2024-12-09 23:56:09.802817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.936 [2024-12-09 23:56:09.816657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.937 [2024-12-09 23:56:09.816676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.937 [2024-12-09 23:56:09.830470] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.937 [2024-12-09 23:56:09.830488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.937 [2024-12-09 
23:56:09.844413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.937 [2024-12-09 23:56:09.844432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.937 [2024-12-09 23:56:09.853549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.937 [2024-12-09 23:56:09.853567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.937 [2024-12-09 23:56:09.862862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.937 [2024-12-09 23:56:09.862880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:09.877490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:09.877510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:09.891320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:09.891338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:09.904810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:09.904828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:09.918995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:09.919017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:09.932863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:09.932882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:09.946673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:09.946692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:09.960457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:09.960475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:09.974618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:09.974636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:09.985462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:09.985480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:09.994784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:09.994801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:10.004192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:10.004211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:10.019376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:10.019395] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:10.034724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:10.034744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:10.048961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:10.048980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:10.062321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:10.062342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:10.077611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:10.077630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:10.093029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:10.093048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:10.107396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.195 [2024-12-09 23:56:10.107416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.195 [2024-12-09 23:56:10.121574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.196 [2024-12-09 23:56:10.121593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.135868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.135889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.146735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.146754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.161109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.161128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.175618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.175641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.190742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.190761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.204964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.204984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.218924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.218945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.233091] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.233109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.247311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.247330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.261339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.261357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.275277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.275296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.284592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.284610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.298925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.298944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.313148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.313173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.324154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.324179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.338881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.338901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.349402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.349420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.358911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.358929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.373323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.373341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.455 [2024-12-09 23:56:10.387519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.455 [2024-12-09 23:56:10.387540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.714 [2024-12-09 23:56:10.403214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.714 [2024-12-09 23:56:10.403234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.714 [2024-12-09 23:56:10.418221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.714 [2024-12-09 23:56:10.418239] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.714 [2024-12-09 23:56:10.433388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.714 [2024-12-09 23:56:10.433407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.714 [2024-12-09 23:56:10.447752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.714 [2024-12-09 23:56:10.447771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.714 [2024-12-09 23:56:10.461846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.714 [2024-12-09 23:56:10.461864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.714 [2024-12-09 23:56:10.472536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.714 [2024-12-09 23:56:10.472554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.714 [2024-12-09 23:56:10.486750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.715 [2024-12-09 23:56:10.486768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.715 [2024-12-09 23:56:10.500292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.715 [2024-12-09 23:56:10.500311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.715 [2024-12-09 23:56:10.514262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.715 [2024-12-09 23:56:10.514280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.715 [2024-12-09 23:56:10.528258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.715 [2024-12-09 23:56:10.528278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.715 [2024-12-09 23:56:10.541832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.715 [2024-12-09 23:56:10.541852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.715 [2024-12-09 23:56:10.555856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.715 [2024-12-09 23:56:10.555876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.715 [2024-12-09 23:56:10.569707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.715 [2024-12-09 23:56:10.569727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.715 [2024-12-09 23:56:10.584036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.715 [2024-12-09 23:56:10.584056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.715 [2024-12-09 23:56:10.595240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.715 [2024-12-09 23:56:10.595258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.715 [2024-12-09 23:56:10.609101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.715 [2024-12-09 23:56:10.609119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.715 [2024-12-09 23:56:10.623399] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.715 [2024-12-09 23:56:10.623417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.715 [2024-12-09 23:56:10.637299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.715 [2024-12-09 23:56:10.637317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.652108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.652128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.662865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.662884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.676871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.676890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.691066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.691085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 16696.00 IOPS, 130.44 MiB/s [2024-12-09T22:56:10.910Z] [2024-12-09 23:56:10.705684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.705704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.721001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.721020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.730642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.730660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.744513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.744533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.753916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.753933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.762992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.763010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.777479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.777497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.791164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.791183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.805031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:35.974 [2024-12-09 23:56:10.805049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.819048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.819066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.829448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.829470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.844493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.844511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.855605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.855623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.869739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.869757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.883466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.883484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.974 [2024-12-09 23:56:10.897186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.974 [2024-12-09 23:56:10.897205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:10.911306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:10.911326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:10.925248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:10.925267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:10.938937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:10.938956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:10.953114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:10.953133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:10.967301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:10.967318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:10.981557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:10.981574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:10.995430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:10.995448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:11.009368] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:11.009385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:11.023127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:11.023145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:11.032029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:11.032047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:11.046503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:11.046521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:11.060563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:11.060581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:11.074152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:11.074176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:11.087647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:11.087665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:11.101971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:11.101989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:11.112907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:11.112925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:11.127400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:11.127418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:11.136506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:11.136524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:11.150879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:11.150896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.234 [2024-12-09 23:56:11.165094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.234 [2024-12-09 23:56:11.165115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.493 [2024-12-09 23:56:11.180153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.493 [2024-12-09 23:56:11.180182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.493 [2024-12-09 23:56:11.194365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.493 [2024-12-09 23:56:11.194385] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.493 [2024-12-09 23:56:11.203758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.493 [2024-12-09 23:56:11.203776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.493 [2024-12-09 23:56:11.218013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.493 [2024-12-09 23:56:11.218033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.493 [2024-12-09 23:56:11.231552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.493 [2024-12-09 23:56:11.231573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.493 [2024-12-09 23:56:11.245331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.493 [2024-12-09 23:56:11.245350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.493 [2024-12-09 23:56:11.259404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.493 [2024-12-09 23:56:11.259423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.493 [2024-12-09 23:56:11.273246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.493 [2024-12-09 23:56:11.273265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.493 [2024-12-09 23:56:11.286825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.493 [2024-12-09 23:56:11.286843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.493 [2024-12-09 23:56:11.300449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.493 [2024-12-09 23:56:11.300467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.494 [2024-12-09 23:56:11.314140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.494 [2024-12-09 23:56:11.314164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.494 [2024-12-09 23:56:11.327799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.494 [2024-12-09 23:56:11.327817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.494 [2024-12-09 23:56:11.341493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.494 [2024-12-09 23:56:11.341512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.494 [2024-12-09 23:56:11.355608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.494 [2024-12-09 23:56:11.355626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.494 [2024-12-09 23:56:11.369369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.494 [2024-12-09 23:56:11.369387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.494 [2024-12-09 23:56:11.383033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.494 [2024-12-09 23:56:11.383052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.494 [2024-12-09 23:56:11.396206] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.494 [2024-12-09 23:56:11.396223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.494 [2024-12-09 23:56:11.410221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.494 [2024-12-09 23:56:11.410239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.494 [2024-12-09 23:56:11.424337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.494 [2024-12-09 23:56:11.424356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.438155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.438185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.452208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.452227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.466831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.466849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.480415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.480434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.493946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.493964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.507801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.507820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.521408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.521427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.530910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.530928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.545153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.545177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.558625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.558643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.572223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.572241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.585897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.585915] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.599940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.599959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.613616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.613635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.627720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.627738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.641813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.641831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.655805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.655823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.669528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.669546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.753 [2024-12-09 23:56:11.683370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.753 [2024-12-09 23:56:11.683389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 16748.50 IOPS, 130.85 MiB/s [2024-12-09T22:56:11.948Z] [2024-12-09 23:56:11.697267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.697291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.711568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.711587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.725390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.725408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.738873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.738891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.752616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.752634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.766287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.766306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.780170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.780189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 
23:56:11.793842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.793860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.807536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.807553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.821779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.821797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.832937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.832954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.846825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.846843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.860812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.860830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.874898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.874916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.885495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.885513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.899832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.899852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.913657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.913677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.927309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.927329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.012 [2024-12-09 23:56:11.941413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.012 [2024-12-09 23:56:11.941433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:11.955118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:11.955140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:11.969352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:11.969372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:11.982953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:11.982972] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:11.996994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:11.997013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.011010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.011029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.024777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.024796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.038487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.038506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.052131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.052149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.065953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.065974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.080020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.080038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.089236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.089255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.103843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.103862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.117605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.117623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.131810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.131828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.145708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.145727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.159456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.159474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.168921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.168939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.178335] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.178353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.271 [2024-12-09 23:56:12.192678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.271 [2024-12-09 23:56:12.192697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.207390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.207410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.222513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.222533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.236676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.236695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.246266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.246284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.260539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.260561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.274227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.274247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.288626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.288645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.302677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.302697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.317017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.317034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.330862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.330881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.344686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.344704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.358496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.358514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.371985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.372004] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.386109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.386128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.399822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.399840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.530 [2024-12-09 23:56:12.413392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.530 [2024-12-09 23:56:12.413410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.531 [2024-12-09 23:56:12.427540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.531 [2024-12-09 23:56:12.427559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.531 [2024-12-09 23:56:12.441406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.531 [2024-12-09 23:56:12.441424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.531 [2024-12-09 23:56:12.455177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.531 [2024-12-09 23:56:12.455195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.789 [2024-12-09 23:56:12.469118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.789 [2024-12-09 23:56:12.469138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.789 [2024-12-09 23:56:12.483099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.789 [2024-12-09 23:56:12.483118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.789 [2024-12-09 23:56:12.497194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.789 [2024-12-09 23:56:12.497212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.789 [2024-12-09 23:56:12.511232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.789 [2024-12-09 23:56:12.511255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.789 [2024-12-09 23:56:12.521672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.789 [2024-12-09 23:56:12.521690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.789 [2024-12-09 23:56:12.536119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.789 [2024-12-09 23:56:12.536137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.789 [2024-12-09 23:56:12.550021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.789 [2024-12-09 23:56:12.550039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.789 [2024-12-09 23:56:12.563795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.789 [2024-12-09 23:56:12.563813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.789 [2024-12-09 23:56:12.573048] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.789 [2024-12-09 23:56:12.573066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.789 [2024-12-09 23:56:12.587136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.789 [2024-12-09 23:56:12.587154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.789 [2024-12-09 23:56:12.600755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.789 [2024-12-09 23:56:12.600774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.789 [2024-12-09 23:56:12.614735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.789 [2024-12-09 23:56:12.614754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.790 [2024-12-09 23:56:12.628430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.790 [2024-12-09 23:56:12.628449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.790 [2024-12-09 23:56:12.642715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.790 [2024-12-09 23:56:12.642733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.790 [2024-12-09 23:56:12.652950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.790 [2024-12-09 23:56:12.652967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.790 [2024-12-09 23:56:12.667388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.790 [2024-12-09 23:56:12.667407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.790 [2024-12-09 23:56:12.681082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.790 [2024-12-09 23:56:12.681100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.790 [2024-12-09 23:56:12.694630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.790 [2024-12-09 23:56:12.694648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.790 16771.20 IOPS, 131.03 MiB/s 00:14:37.790 Latency(us) 00:14:37.790 [2024-12-09T22:56:12.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.790 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:37.790 Nvme1n1 : 5.01 16779.82 131.09 0.00 0.00 7622.14 3390.78 13335.15 00:14:37.790 [2024-12-09T22:56:12.726Z] =================================================================================================================== 00:14:37.790 [2024-12-09T22:56:12.726Z] Total : 16779.82 131.09 0.00 0.00 7622.14 3390.78 13335.15 00:14:37.790 [2024-12-09 23:56:12.704802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.790 [2024-12-09 23:56:12.704819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.790 [2024-12-09 23:56:12.716868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.790 [2024-12-09 23:56:12.716883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.048 [2024-12-09 23:56:12.728915] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.048 [2024-12-09 23:56:12.728939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.048 [2024-12-09 23:56:12.740935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.048 [2024-12-09 23:56:12.740954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.048 [2024-12-09 23:56:12.752968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.048 [2024-12-09 23:56:12.752988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.048 [2024-12-09 23:56:12.765003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.048 [2024-12-09 23:56:12.765017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.048 [2024-12-09 23:56:12.777031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.048 [2024-12-09 23:56:12.777046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.048 [2024-12-09 23:56:12.789060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.048 [2024-12-09 23:56:12.789076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.048 [2024-12-09 23:56:12.801095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.048 [2024-12-09 23:56:12.801109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.048 [2024-12-09 23:56:12.813122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.048 [2024-12-09 23:56:12.813134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.048 [2024-12-09 23:56:12.825166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.048 [2024-12-09 23:56:12.825178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.048 [2024-12-09 23:56:12.837192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.048 [2024-12-09 23:56:12.837205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.048 [2024-12-09 23:56:12.849217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.048 [2024-12-09 23:56:12.849227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.048 [2024-12-09 23:56:12.861250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.048 [2024-12-09 23:56:12.861260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (246256) - No such process 00:14:38.048 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 246256 00:14:38.048 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.048 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.048 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.048 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.048 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:38.048 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.048 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.048 delay0 00:14:38.048 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.048 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:38.048 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.048 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.048 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.048 23:56:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:38.306 [2024-12-09 23:56:13.013924] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:44.869 Initializing NVMe Controllers 00:14:44.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:44.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:44.869 Initialization complete. Launching workers. 
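Before the abort run above, zcopy.sh (lines 52 through 56 in the xtrace) removes the original namespace, wraps malloc0 in a delay bdev, and re-attaches it as NSID 1 so that commands stay in flight long enough to be aborted. A minimal sketch of the same sequence, assuming rpc_cmd resolves to scripts/rpc.py on the default RPC socket and that the target started earlier in the test is still listening on 10.0.0.2:4420:
# Sketch only - mirrors the rpc_cmd/abort calls traced in the log above,
# not a verbatim excerpt of test/nvmf/target/zcopy.sh.
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s average and p99 read/write latency
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
With roughly one second of added latency on every I/O, the abort example has a deep queue of outstanding commands to cancel, which is what produces the submitted/success/unsuccessful counters reported below.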
00:14:44.869 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3594 00:14:44.869 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3881, failed to submit 33 00:14:44.869 success 3705, unsuccessful 176, failed 0 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:44.869 rmmod nvme_tcp 00:14:44.869 rmmod nvme_fabrics 00:14:44.869 rmmod nvme_keyring 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 244400 ']' 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 244400 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 244400 ']' 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 244400 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 244400 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 244400' 00:14:44.869 killing process with pid 244400 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 244400 00:14:44.869 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 244400 00:14:45.129 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:45.129 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:45.129 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:45.129 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:14:45.129 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:14:45.129 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:45.129 23:56:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:14:45.129 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:45.129 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:45.129 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.129 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.129 23:56:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.669 23:56:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:47.669 00:14:47.669 real 0m31.876s 00:14:47.669 user 0m43.836s 00:14:47.669 sys 0m10.188s 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:47.669 ************************************ 00:14:47.669 END TEST nvmf_zcopy 00:14:47.669 ************************************ 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:47.669 ************************************ 00:14:47.669 START TEST nvmf_nmic 00:14:47.669 ************************************ 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:47.669 * Looking for test storage... 
00:14:47.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:47.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.669 --rc genhtml_branch_coverage=1 00:14:47.669 --rc genhtml_function_coverage=1 00:14:47.669 --rc genhtml_legend=1 00:14:47.669 --rc geninfo_all_blocks=1 00:14:47.669 --rc geninfo_unexecuted_blocks=1 00:14:47.669 00:14:47.669 ' 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:47.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.669 --rc genhtml_branch_coverage=1 00:14:47.669 --rc genhtml_function_coverage=1 00:14:47.669 --rc genhtml_legend=1 00:14:47.669 --rc geninfo_all_blocks=1 00:14:47.669 --rc geninfo_unexecuted_blocks=1 00:14:47.669 00:14:47.669 ' 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:47.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.669 --rc genhtml_branch_coverage=1 00:14:47.669 --rc genhtml_function_coverage=1 00:14:47.669 --rc genhtml_legend=1 00:14:47.669 --rc geninfo_all_blocks=1 00:14:47.669 --rc geninfo_unexecuted_blocks=1 00:14:47.669 00:14:47.669 ' 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:47.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.669 --rc genhtml_branch_coverage=1 00:14:47.669 --rc genhtml_function_coverage=1 00:14:47.669 --rc genhtml_legend=1 00:14:47.669 --rc geninfo_all_blocks=1 00:14:47.669 --rc geninfo_unexecuted_blocks=1 00:14:47.669 00:14:47.669 ' 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
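The scripts/common.sh xtrace above is the coverage setup checking whether the installed lcov predates version 2, which decides which --rc options are kept in LCOV_OPTS. A condensed sketch of that check, not the exact cmp_versions implementation (the real helper also handles the other comparison operators):
# Condensed sketch of the "lt 1.15 2" comparison traced above; versions are
# split on '.', '-' and ':' and compared component by component.
lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1
}
# lcov 1.15 sorts before 2, so the legacy flags stay enabled (the trace also
# carries matching genhtml_*/geninfo_* options alongside these two).
lt "$(lcov --version | awk '{print $NF}')" 2 &&
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'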
00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.669 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:47.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:47.670 
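The "[: : integer expression expected" message above is nvmf/common.sh line 33 applying a numeric test to an empty string ('[' '' -eq 1 ']'); the harness tolerates it, but the noise is avoidable. A hedged one-line illustration of the usual guard (the variable name below is made up for the example):

# hypothetical flag, shown only to illustrate the ":-0" default that keeps the test numeric
if [ "${SPDK_TEST_EXAMPLE_FLAG:-0}" -eq 1 ]; then
    echo "optional feature enabled"
fi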
23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:14:47.670 23:56:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:54.246 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:54.246 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:54.246 23:56:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:54.246 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.247 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:54.247 Found net devices under 0000:86:00.0: cvl_0_0 00:14:54.247 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.247 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:54.247 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.247 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:54.247 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.247 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:54.247 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:54.247 23:56:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:54.247 Found net devices under 0000:86:00.1: cvl_0_1 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
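gather_supported_nvmf_pci_devs above matches the two E810 ports (device ID 0x159b) and then resolves each PCI function to its kernel netdev by globbing sysfs. A stripped-down sketch of that sysfs walk, with the PCI address hard-coded from the log purely as an example:

pci=0000:86:00.0                                    # first E810 port reported above
for path in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$path" ] || continue                      # skip functions with no bound netdev
    echo "Found net devices under $pci: ${path##*/}"
done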
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:54.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:14:54.247 00:14:54.247 --- 10.0.0.2 ping statistics --- 00:14:54.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.247 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:54.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:14:54.247 00:14:54.247 --- 10.0.0.1 ping statistics --- 00:14:54.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.247 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=251851 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 251851 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 251851 ']' 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.247 [2024-12-09 23:56:28.354396] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
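nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten then blocks until the target's JSON-RPC socket responds. A condensed, illustrative launch-and-wait along the same lines (relative paths and the default /var/tmp/spdk.sock socket are assumptions, not values from this trace):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!
# poll the RPC socket until the target answers, bailing out if the process dies first
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $tgt_pid) is up"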
00:14:54.247 [2024-12-09 23:56:28.354463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.247 [2024-12-09 23:56:28.434400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.247 [2024-12-09 23:56:28.475725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.247 [2024-12-09 23:56:28.475763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.247 [2024-12-09 23:56:28.475771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.247 [2024-12-09 23:56:28.475778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.247 [2024-12-09 23:56:28.475783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.247 [2024-12-09 23:56:28.477196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.247 [2024-12-09 23:56:28.477246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.247 [2024-12-09 23:56:28.477354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.247 [2024-12-09 23:56:28.477355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.247 [2024-12-09 23:56:28.622634] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.247 Malloc0 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.247 [2024-12-09 23:56:28.693355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.247 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:54.248 test case1: single bdev can't be used in multiple subsystems 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.248 [2024-12-09 23:56:28.721264] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:54.248 [2024-12-09 23:56:28.721285] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:54.248 [2024-12-09 23:56:28.721293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:54.248 request: 00:14:54.248 { 00:14:54.248 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:54.248 "namespace": { 00:14:54.248 "bdev_name": "Malloc0", 00:14:54.248 "no_auto_visible": false, 
00:14:54.248 "hide_metadata": false 00:14:54.248 }, 00:14:54.248 "method": "nvmf_subsystem_add_ns", 00:14:54.248 "req_id": 1 00:14:54.248 } 00:14:54.248 Got JSON-RPC error response 00:14:54.248 response: 00:14:54.248 { 00:14:54.248 "code": -32602, 00:14:54.248 "message": "Invalid parameters" 00:14:54.248 } 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:54.248 Adding namespace failed - expected result. 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:54.248 test case2: host connect to nvmf target in multiple paths 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:54.248 [2024-12-09 23:56:28.733410] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.248 23:56:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:55.183 23:56:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:56.561 23:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:56.561 23:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:14:56.561 23:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.561 23:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:56.561 23:56:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:14:58.466 23:56:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:58.466 23:56:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:58.466 23:56:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.466 23:56:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:58.466 23:56:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.466 23:56:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:14:58.466 23:56:33 
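Stripped of the xtrace plumbing, the nmic test above drives the target through a short rpc.py sequence: one malloc bdev, two subsystems, a deliberately failing attempt to attach the same bdev twice, and a host connect over two listeners. A hand-written equivalent (rpc.py path abbreviated and the --hostnqn/--hostid flags omitted for brevity; treat this as a sketch, not the test script):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# test case 1: a bdev already claimed by cnode1 cannot be added to cnode2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "Adding namespace failed - expected result."
fi
# test case 2: the host reaches the same subsystem through two listeners
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421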
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:58.466 [global] 00:14:58.466 thread=1 00:14:58.466 invalidate=1 00:14:58.466 rw=write 00:14:58.466 time_based=1 00:14:58.466 runtime=1 00:14:58.466 ioengine=libaio 00:14:58.466 direct=1 00:14:58.466 bs=4096 00:14:58.466 iodepth=1 00:14:58.466 norandommap=0 00:14:58.466 numjobs=1 00:14:58.466 00:14:58.466 verify_dump=1 00:14:58.466 verify_backlog=512 00:14:58.466 verify_state_save=0 00:14:58.466 do_verify=1 00:14:58.466 verify=crc32c-intel 00:14:58.466 [job0] 00:14:58.466 filename=/dev/nvme0n1 00:14:58.466 Could not set queue depth (nvme0n1) 00:14:59.032 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:59.032 fio-3.35 00:14:59.032 Starting 1 thread 00:14:59.969 00:14:59.969 job0: (groupid=0, jobs=1): err= 0: pid=252926: Mon Dec 9 23:56:34 2024 00:14:59.969 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:14:59.969 slat (nsec): min=6280, max=26764, avg=7227.83, stdev=1021.80 00:14:59.969 clat (usec): min=165, max=439, avg=220.76, stdev=11.68 00:14:59.969 lat (usec): min=172, max=451, avg=227.99, stdev=11.74 00:14:59.969 clat percentiles (usec): 00:14:59.969 | 1.00th=[ 188], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:14:59.969 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 223], 00:14:59.969 | 70.00th=[ 225], 80.00th=[ 227], 90.00th=[ 233], 95.00th=[ 237], 00:14:59.969 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 289], 99.95th=[ 289], 00:14:59.969 | 99.99th=[ 441] 00:14:59.969 write: IOPS=2910, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:14:59.969 slat (nsec): min=9204, max=48154, avg=10196.69, stdev=1310.11 00:14:59.969 clat (usec): min=108, max=335, avg=128.73, stdev=10.03 00:14:59.969 lat (usec): min=124, max=384, avg=138.93, stdev=10.41 00:14:59.969 clat percentiles (usec): 00:14:59.969 | 1.00th=[ 118], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 123], 00:14:59.969 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 128], 00:14:59.969 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 137], 95.00th=[ 151], 00:14:59.969 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 192], 99.95th=[ 202], 00:14:59.969 | 99.99th=[ 338] 00:14:59.969 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:14:59.969 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:14:59.969 lat (usec) : 250=99.03%, 500=0.97% 00:14:59.969 cpu : usr=2.50%, sys=5.20%, ctx=5473, majf=0, minf=1 00:14:59.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:59.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.969 issued rwts: total=2560,2913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:59.969 00:14:59.969 Run status group 0 (all jobs): 00:14:59.969 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:14:59.969 WRITE: bw=11.4MiB/s (11.9MB/s), 11.4MiB/s-11.4MiB/s (11.9MB/s-11.9MB/s), io=11.4MiB (11.9MB), run=1001-1001msec 00:14:59.969 00:14:59.969 Disk stats (read/write): 00:14:59.969 nvme0n1: ios=2383/2560, merge=0/0, ticks=605/316, in_queue=921, util=95.59% 00:14:59.969 23:56:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
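The fio-wrapper call above simply renders its -i/-d/-t/-r/-v options into the [global]/[job0] file echoed in the log and runs fio against the freshly connected namespace. Written out by hand (the job-file path below is chosen just for the example), the same run is roughly:

cat > /tmp/nmic-job0.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
numjobs=1
verify=crc32c-intel
do_verify=1

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nmic-job0.fio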
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:00.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:00.228 rmmod nvme_tcp 00:15:00.228 rmmod nvme_fabrics 00:15:00.228 rmmod nvme_keyring 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 251851 ']' 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 251851 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 251851 ']' 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 251851 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.228 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251851 00:15:00.487 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.487 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.487 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251851' 00:15:00.487 killing process with pid 251851 00:15:00.487 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 251851 00:15:00.488 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 251851 00:15:00.488 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:00.488 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:00.488 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:00.488 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:15:00.488 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:00.488 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:15:00.488 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:15:00.488 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:00.488 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:00.488 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.488 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.488 23:56:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:03.029 00:15:03.029 real 0m15.365s 00:15:03.029 user 0m34.762s 00:15:03.029 sys 0m5.559s 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:03.029 ************************************ 00:15:03.029 END TEST nvmf_nmic 00:15:03.029 ************************************ 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:03.029 ************************************ 00:15:03.029 START TEST nvmf_fio_target 00:15:03.029 ************************************ 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:03.029 * Looking for test storage... 
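nvmftestfini above unwinds the setup: disconnect the host, kill the target, unload the fabrics modules, drop the SPDK_NVMF iptables rules, and tear down the namespace. Boiled down to its effect (namespace name, NQN, and rule comment come from the trace; the rest is an illustrative approximation):

nvme disconnect -n nqn.2016-06.io.spdk:cnode1 || true
kill "$tgt_pid" 2>/dev/null && wait "$tgt_pid" 2>/dev/null   # pid captured in the earlier launch sketch
modprobe -r nvme-tcp nvme-fabrics nvme-keyring 2>/dev/null || true
# keep every iptables rule except those tagged with the SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1 2>/dev/null || true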
00:15:03.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:03.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.029 --rc genhtml_branch_coverage=1 00:15:03.029 --rc genhtml_function_coverage=1 00:15:03.029 --rc genhtml_legend=1 00:15:03.029 --rc geninfo_all_blocks=1 00:15:03.029 --rc geninfo_unexecuted_blocks=1 00:15:03.029 00:15:03.029 ' 00:15:03.029 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:03.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.030 --rc genhtml_branch_coverage=1 00:15:03.030 --rc genhtml_function_coverage=1 00:15:03.030 --rc genhtml_legend=1 00:15:03.030 --rc geninfo_all_blocks=1 00:15:03.030 --rc geninfo_unexecuted_blocks=1 00:15:03.030 00:15:03.030 ' 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:03.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.030 --rc genhtml_branch_coverage=1 00:15:03.030 --rc genhtml_function_coverage=1 00:15:03.030 --rc genhtml_legend=1 00:15:03.030 --rc geninfo_all_blocks=1 00:15:03.030 --rc geninfo_unexecuted_blocks=1 00:15:03.030 00:15:03.030 ' 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:03.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.030 --rc genhtml_branch_coverage=1 00:15:03.030 --rc genhtml_function_coverage=1 00:15:03.030 --rc genhtml_legend=1 00:15:03.030 --rc geninfo_all_blocks=1 00:15:03.030 --rc geninfo_unexecuted_blocks=1 00:15:03.030 00:15:03.030 ' 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:03.030 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.031 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:03.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:03.032 23:56:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:03.032 23:56:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.606 23:56:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:09.606 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:09.606 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.606 23:56:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:09.606 Found net devices under 0000:86:00.0: cvl_0_0 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:09.606 Found net devices under 0000:86:00.1: cvl_0_1 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:09.606 23:56:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:09.606 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:09.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:09.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:15:09.607 00:15:09.607 --- 10.0.0.2 ping statistics --- 00:15:09.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.607 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:09.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:15:09.607 00:15:09.607 --- 10.0.0.1 ping statistics --- 00:15:09.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.607 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=256698 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 256698 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 256698 ']' 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.607 [2024-12-09 23:56:43.761799] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:15:09.607 [2024-12-09 23:56:43.761844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.607 [2024-12-09 23:56:43.826363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.607 [2024-12-09 23:56:43.868569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.607 [2024-12-09 23:56:43.868605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.607 [2024-12-09 23:56:43.868613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.607 [2024-12-09 23:56:43.868618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.607 [2024-12-09 23:56:43.868624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.607 [2024-12-09 23:56:43.870146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.607 [2024-12-09 23:56:43.870190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.607 [2024-12-09 23:56:43.870306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.607 [2024-12-09 23:56:43.870306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:09.607 23:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.607 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.607 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:09.607 [2024-12-09 23:56:44.184232] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.607 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:09.607 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:09.607 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:09.864 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:09.864 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:10.122 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:10.122 23:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:10.379 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:10.379 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:10.379 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:10.636 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:10.636 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:10.893 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:10.893 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:11.151 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:11.151 23:56:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:11.409 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:11.409 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:11.409 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:11.667 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:11.667 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.924 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.182 [2024-12-09 23:56:46.909859] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.182 23:56:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:12.439 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:12.439 23:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:13.811 
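For reference, the target-side setup traced above reduces to the short RPC sequence below. This is an editor-added sketch, not captured console output: it assumes nvmf_tgt is already running and reachable on the default /var/tmp/spdk.sock (the harness starts it inside the cvl_0_0_ns_spdk namespace with -m 0xF), and it shortens the full workspace path to ./scripts/rpc.py. Every command and argument is taken verbatim from the trace; the --hostnqn/--hostid values passed to nvme connect are omitted here for brevity.

    # create the TCP transport and the backing malloc bdevs (64 MiB, 512 B blocks)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512        # repeated -> Malloc0 .. Malloc6
    # build a RAID0 and a concat bdev from some of the malloc bdevs
    ./scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    ./scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    # expose everything through one subsystem listening on 10.0.0.2:4420
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    # initiator side: attach over TCP
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

Once the connect completes, the four namespaces surface as /dev/nvme0n1 through /dev/nvme0n4, which are the filenames used by the fio-wrapper jobs that follow.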
23:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:13.811 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:15:13.812 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:13.812 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:15:13.812 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:15:13.812 23:56:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:15:15.709 23:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:15.709 23:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:15.709 23:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.709 23:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:15:15.709 23:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.709 23:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:15:15.709 23:56:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:15.709 [global] 00:15:15.709 thread=1 00:15:15.709 invalidate=1 00:15:15.709 rw=write 00:15:15.709 time_based=1 00:15:15.709 runtime=1 00:15:15.709 ioengine=libaio 00:15:15.709 direct=1 00:15:15.709 bs=4096 00:15:15.709 iodepth=1 00:15:15.709 norandommap=0 00:15:15.709 numjobs=1 00:15:15.709 00:15:15.709 verify_dump=1 00:15:15.709 verify_backlog=512 00:15:15.709 verify_state_save=0 00:15:15.709 do_verify=1 00:15:15.709 verify=crc32c-intel 00:15:15.709 [job0] 00:15:15.709 filename=/dev/nvme0n1 00:15:15.709 [job1] 00:15:15.709 filename=/dev/nvme0n2 00:15:15.709 [job2] 00:15:15.709 filename=/dev/nvme0n3 00:15:15.709 [job3] 00:15:15.709 filename=/dev/nvme0n4 00:15:15.709 Could not set queue depth (nvme0n1) 00:15:15.709 Could not set queue depth (nvme0n2) 00:15:15.709 Could not set queue depth (nvme0n3) 00:15:15.709 Could not set queue depth (nvme0n4) 00:15:15.966 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:15.966 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:15.966 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:15.966 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:15.966 fio-3.35 00:15:15.966 Starting 4 threads 00:15:17.338 00:15:17.338 job0: (groupid=0, jobs=1): err= 0: pid=258050: Mon Dec 9 23:56:52 2024 00:15:17.338 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:15:17.338 slat (nsec): min=10023, max=25228, avg=23515.77, stdev=3037.49 00:15:17.338 clat (usec): min=40663, max=41905, avg=41005.51, stdev=224.34 00:15:17.338 lat (usec): min=40673, max=41929, avg=41029.03, stdev=225.43 00:15:17.338 clat percentiles (usec): 00:15:17.338 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 
20.00th=[41157], 00:15:17.338 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:17.338 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:17.338 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:17.338 | 99.99th=[41681] 00:15:17.338 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:15:17.338 slat (nsec): min=10698, max=57968, avg=12611.80, stdev=2568.21 00:15:17.338 clat (usec): min=130, max=276, avg=189.25, stdev=21.03 00:15:17.338 lat (usec): min=141, max=332, avg=201.86, stdev=21.58 00:15:17.338 clat percentiles (usec): 00:15:17.338 | 1.00th=[ 137], 5.00th=[ 151], 10.00th=[ 165], 20.00th=[ 176], 00:15:17.338 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 194], 00:15:17.338 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 223], 00:15:17.338 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 277], 99.95th=[ 277], 00:15:17.339 | 99.99th=[ 277] 00:15:17.339 bw ( KiB/s): min= 4096, max= 4096, per=25.88%, avg=4096.00, stdev= 0.00, samples=1 00:15:17.339 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:17.339 lat (usec) : 250=95.51%, 500=0.37% 00:15:17.339 lat (msec) : 50=4.12% 00:15:17.339 cpu : usr=0.30%, sys=1.09%, ctx=537, majf=0, minf=1 00:15:17.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:17.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.339 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:17.339 job1: (groupid=0, jobs=1): err= 0: pid=258051: Mon Dec 9 23:56:52 2024 00:15:17.339 read: IOPS=22, BW=88.9KiB/s (91.0kB/s)(92.0KiB/1035msec) 00:15:17.339 slat (nsec): min=10570, max=23237, avg=22023.09, stdev=2509.90 00:15:17.339 clat (usec): min=40737, max=41114, avg=40960.91, stdev=75.02 00:15:17.339 lat (usec): min=40748, max=41136, avg=40982.93, stdev=76.65 00:15:17.339 clat percentiles (usec): 00:15:17.339 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:17.339 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:17.339 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:17.339 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:17.339 | 99.99th=[41157] 00:15:17.339 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:15:17.339 slat (nsec): min=10632, max=48679, avg=12180.26, stdev=2070.39 00:15:17.339 clat (usec): min=137, max=304, avg=162.40, stdev=12.50 00:15:17.339 lat (usec): min=149, max=353, avg=174.58, stdev=13.53 00:15:17.339 clat percentiles (usec): 00:15:17.339 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:15:17.339 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:15:17.339 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 176], 95.00th=[ 182], 00:15:17.339 | 99.00th=[ 192], 99.50th=[ 208], 99.90th=[ 306], 99.95th=[ 306], 00:15:17.339 | 99.99th=[ 306] 00:15:17.339 bw ( KiB/s): min= 4096, max= 4096, per=25.88%, avg=4096.00, stdev= 0.00, samples=1 00:15:17.339 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:17.339 lat (usec) : 250=95.51%, 500=0.19% 00:15:17.339 lat (msec) : 50=4.30% 00:15:17.339 cpu : usr=0.48%, sys=0.87%, ctx=536, majf=0, minf=1 00:15:17.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:15:17.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.339 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:17.339 job2: (groupid=0, jobs=1): err= 0: pid=258053: Mon Dec 9 23:56:52 2024 00:15:17.339 read: IOPS=185, BW=743KiB/s (761kB/s)(744KiB/1001msec) 00:15:17.339 slat (nsec): min=7754, max=24214, avg=10185.03, stdev=4480.03 00:15:17.339 clat (usec): min=191, max=41054, avg=4829.27, stdev=12870.97 00:15:17.339 lat (usec): min=199, max=41076, avg=4839.45, stdev=12875.23 00:15:17.339 clat percentiles (usec): 00:15:17.339 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 233], 00:15:17.339 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:15:17.339 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[41157], 95.00th=[41157], 00:15:17.339 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:17.339 | 99.99th=[41157] 00:15:17.339 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:15:17.339 slat (nsec): min=11086, max=49709, avg=12631.35, stdev=2351.21 00:15:17.339 clat (usec): min=137, max=297, avg=175.82, stdev=21.39 00:15:17.339 lat (usec): min=149, max=314, avg=188.45, stdev=21.84 00:15:17.339 clat percentiles (usec): 00:15:17.339 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:15:17.339 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:15:17.339 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 202], 95.00th=[ 219], 00:15:17.339 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 297], 99.95th=[ 297], 00:15:17.339 | 99.99th=[ 297] 00:15:17.339 bw ( KiB/s): min= 4096, max= 4096, per=25.88%, avg=4096.00, stdev= 0.00, samples=1 00:15:17.339 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:17.339 lat (usec) : 250=86.96%, 500=10.03% 00:15:17.339 lat (msec) : 50=3.01% 00:15:17.339 cpu : usr=0.50%, sys=1.30%, ctx=699, majf=0, minf=2 00:15:17.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:17.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.339 issued rwts: total=186,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:17.339 job3: (groupid=0, jobs=1): err= 0: pid=258055: Mon Dec 9 23:56:52 2024 00:15:17.339 read: IOPS=2361, BW=9447KiB/s (9673kB/s)(9456KiB/1001msec) 00:15:17.339 slat (nsec): min=7459, max=41412, avg=8488.56, stdev=1625.61 00:15:17.339 clat (usec): min=171, max=468, avg=222.33, stdev=30.15 00:15:17.339 lat (usec): min=179, max=483, avg=230.82, stdev=30.26 00:15:17.339 clat percentiles (usec): 00:15:17.339 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 196], 00:15:17.339 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 233], 00:15:17.339 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:15:17.339 | 99.00th=[ 285], 99.50th=[ 347], 99.90th=[ 457], 99.95th=[ 465], 00:15:17.339 | 99.99th=[ 469] 00:15:17.339 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:17.339 slat (nsec): min=10933, max=48503, avg=12225.12, stdev=1859.68 00:15:17.339 clat (usec): min=120, max=361, avg=159.09, stdev=26.28 00:15:17.339 lat (usec): min=132, max=397, avg=171.32, stdev=26.61 
00:15:17.339 clat percentiles (usec): 00:15:17.339 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:15:17.339 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 157], 00:15:17.339 | 70.00th=[ 167], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 204], 00:15:17.339 | 99.00th=[ 229], 99.50th=[ 255], 99.90th=[ 351], 99.95th=[ 355], 00:15:17.339 | 99.99th=[ 363] 00:15:17.339 bw ( KiB/s): min=10040, max=10040, per=63.42%, avg=10040.00, stdev= 0.00, samples=1 00:15:17.339 iops : min= 2510, max= 2510, avg=2510.00, stdev= 0.00, samples=1 00:15:17.339 lat (usec) : 250=91.27%, 500=8.73% 00:15:17.339 cpu : usr=3.70%, sys=8.30%, ctx=4925, majf=0, minf=1 00:15:17.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:17.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.340 issued rwts: total=2364,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.340 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:17.340 00:15:17.340 Run status group 0 (all jobs): 00:15:17.340 READ: bw=9.79MiB/s (10.3MB/s), 87.2KiB/s-9447KiB/s (89.3kB/s-9673kB/s), io=10.1MiB (10.6MB), run=1001-1035msec 00:15:17.340 WRITE: bw=15.5MiB/s (16.2MB/s), 1979KiB/s-9.99MiB/s (2026kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1035msec 00:15:17.340 00:15:17.340 Disk stats (read/write): 00:15:17.340 nvme0n1: ios=41/512, merge=0/0, ticks=1602/92, in_queue=1694, util=85.37% 00:15:17.340 nvme0n2: ios=41/512, merge=0/0, ticks=1642/82, in_queue=1724, util=89.43% 00:15:17.340 nvme0n3: ios=41/512, merge=0/0, ticks=1642/85, in_queue=1727, util=93.32% 00:15:17.340 nvme0n4: ios=2071/2081, merge=0/0, ticks=1333/315, in_queue=1648, util=94.11% 00:15:17.340 23:56:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:17.340 [global] 00:15:17.340 thread=1 00:15:17.340 invalidate=1 00:15:17.340 rw=randwrite 00:15:17.340 time_based=1 00:15:17.340 runtime=1 00:15:17.340 ioengine=libaio 00:15:17.340 direct=1 00:15:17.340 bs=4096 00:15:17.340 iodepth=1 00:15:17.340 norandommap=0 00:15:17.340 numjobs=1 00:15:17.340 00:15:17.340 verify_dump=1 00:15:17.340 verify_backlog=512 00:15:17.340 verify_state_save=0 00:15:17.340 do_verify=1 00:15:17.340 verify=crc32c-intel 00:15:17.340 [job0] 00:15:17.340 filename=/dev/nvme0n1 00:15:17.340 [job1] 00:15:17.340 filename=/dev/nvme0n2 00:15:17.340 [job2] 00:15:17.340 filename=/dev/nvme0n3 00:15:17.340 [job3] 00:15:17.340 filename=/dev/nvme0n4 00:15:17.340 Could not set queue depth (nvme0n1) 00:15:17.340 Could not set queue depth (nvme0n2) 00:15:17.340 Could not set queue depth (nvme0n3) 00:15:17.340 Could not set queue depth (nvme0n4) 00:15:17.598 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:17.598 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:17.598 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:17.598 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:17.598 fio-3.35 00:15:17.598 Starting 4 threads 00:15:18.971 00:15:18.971 job0: (groupid=0, jobs=1): err= 0: pid=258443: Mon Dec 9 23:56:53 2024 00:15:18.971 read: IOPS=21, BW=87.6KiB/s 
(89.7kB/s)(88.0KiB/1005msec) 00:15:18.971 slat (nsec): min=9770, max=24020, avg=22542.14, stdev=3300.38 00:15:18.971 clat (usec): min=40396, max=41074, avg=40943.80, stdev=136.76 00:15:18.971 lat (usec): min=40406, max=41096, avg=40966.34, stdev=139.40 00:15:18.971 clat percentiles (usec): 00:15:18.971 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:18.971 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:18.971 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:18.971 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:18.971 | 99.99th=[41157] 00:15:18.971 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:15:18.971 slat (nsec): min=8370, max=40365, avg=11648.03, stdev=3991.68 00:15:18.971 clat (usec): min=131, max=282, avg=188.35, stdev=29.42 00:15:18.971 lat (usec): min=141, max=299, avg=200.00, stdev=29.99 00:15:18.971 clat percentiles (usec): 00:15:18.971 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 159], 00:15:18.971 | 30.00th=[ 169], 40.00th=[ 178], 50.00th=[ 188], 60.00th=[ 200], 00:15:18.971 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 233], 00:15:18.971 | 99.00th=[ 245], 99.50th=[ 260], 99.90th=[ 285], 99.95th=[ 285], 00:15:18.971 | 99.99th=[ 285] 00:15:18.971 bw ( KiB/s): min= 4096, max= 4096, per=17.18%, avg=4096.00, stdev= 0.00, samples=1 00:15:18.971 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:18.971 lat (usec) : 250=95.13%, 500=0.75% 00:15:18.971 lat (msec) : 50=4.12% 00:15:18.971 cpu : usr=0.20%, sys=0.60%, ctx=536, majf=0, minf=1 00:15:18.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:18.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.971 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:18.971 job1: (groupid=0, jobs=1): err= 0: pid=258450: Mon Dec 9 23:56:53 2024 00:15:18.971 read: IOPS=2047, BW=8192KiB/s (8388kB/s)(8200KiB/1001msec) 00:15:18.971 slat (nsec): min=7399, max=39066, avg=8510.58, stdev=1386.86 00:15:18.971 clat (usec): min=186, max=471, avg=246.80, stdev=27.60 00:15:18.971 lat (usec): min=194, max=480, avg=255.32, stdev=27.71 00:15:18.971 clat percentiles (usec): 00:15:18.971 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 231], 00:15:18.971 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:15:18.971 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 277], 95.00th=[ 297], 00:15:18.971 | 99.00th=[ 379], 99.50th=[ 408], 99.90th=[ 461], 99.95th=[ 465], 00:15:18.971 | 99.99th=[ 474] 00:15:18.971 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:18.971 slat (nsec): min=9835, max=50309, avg=11166.95, stdev=1895.90 00:15:18.971 clat (usec): min=114, max=307, avg=169.76, stdev=40.13 00:15:18.971 lat (usec): min=124, max=341, avg=180.93, stdev=40.35 00:15:18.971 clat percentiles (usec): 00:15:18.971 | 1.00th=[ 121], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 137], 00:15:18.971 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 151], 60.00th=[ 169], 00:15:18.971 | 70.00th=[ 186], 80.00th=[ 212], 90.00th=[ 233], 95.00th=[ 241], 00:15:18.971 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 310], 99.95th=[ 310], 00:15:18.971 | 99.99th=[ 310] 00:15:18.971 bw ( KiB/s): min=10456, max=10456, per=43.86%, 
avg=10456.00, stdev= 0.00, samples=1 00:15:18.971 iops : min= 2614, max= 2614, avg=2614.00, stdev= 0.00, samples=1 00:15:18.971 lat (usec) : 250=85.64%, 500=14.36% 00:15:18.971 cpu : usr=3.40%, sys=7.80%, ctx=4610, majf=0, minf=1 00:15:18.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:18.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.971 issued rwts: total=2050,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:18.971 job2: (groupid=0, jobs=1): err= 0: pid=258471: Mon Dec 9 23:56:53 2024 00:15:18.971 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:15:18.971 slat (nsec): min=9354, max=23898, avg=22473.59, stdev=2946.61 00:15:18.971 clat (usec): min=40461, max=41121, avg=40948.80, stdev=118.52 00:15:18.971 lat (usec): min=40470, max=41145, avg=40971.28, stdev=121.27 00:15:18.971 clat percentiles (usec): 00:15:18.971 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:18.971 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:18.971 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:18.971 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:18.971 | 99.99th=[41157] 00:15:18.971 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:15:18.971 slat (usec): min=8, max=28685, avg=68.63, stdev=1267.19 00:15:18.971 clat (usec): min=138, max=308, avg=181.88, stdev=21.35 00:15:18.971 lat (usec): min=148, max=28866, avg=250.51, stdev=1267.34 00:15:18.971 clat percentiles (usec): 00:15:18.971 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 163], 00:15:18.971 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 188], 00:15:18.971 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 217], 00:15:18.971 | 99.00th=[ 229], 99.50th=[ 247], 99.90th=[ 310], 99.95th=[ 310], 00:15:18.971 | 99.99th=[ 310] 00:15:18.971 bw ( KiB/s): min= 4096, max= 4096, per=17.18%, avg=4096.00, stdev= 0.00, samples=1 00:15:18.971 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:18.971 lat (usec) : 250=95.51%, 500=0.37% 00:15:18.971 lat (msec) : 50=4.12% 00:15:18.971 cpu : usr=0.19%, sys=0.68%, ctx=536, majf=0, minf=1 00:15:18.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:18.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.972 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:18.972 job3: (groupid=0, jobs=1): err= 0: pid=258478: Mon Dec 9 23:56:53 2024 00:15:18.972 read: IOPS=2094, BW=8380KiB/s (8581kB/s)(8388KiB/1001msec) 00:15:18.972 slat (nsec): min=7357, max=25288, avg=8414.34, stdev=1154.96 00:15:18.972 clat (usec): min=188, max=487, avg=246.12, stdev=30.52 00:15:18.972 lat (usec): min=196, max=496, avg=254.54, stdev=30.55 00:15:18.972 clat percentiles (usec): 00:15:18.972 | 1.00th=[ 200], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 229], 00:15:18.972 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:15:18.972 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 285], 00:15:18.972 | 99.00th=[ 429], 99.50th=[ 461], 99.90th=[ 482], 99.95th=[ 486], 00:15:18.972 | 
99.99th=[ 490] 00:15:18.972 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:15:18.972 slat (nsec): min=9882, max=52075, avg=11633.34, stdev=2903.06 00:15:18.972 clat (usec): min=113, max=519, avg=165.15, stdev=38.12 00:15:18.972 lat (usec): min=128, max=530, avg=176.78, stdev=39.06 00:15:18.972 clat percentiles (usec): 00:15:18.972 | 1.00th=[ 129], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:15:18.972 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 157], 00:15:18.972 | 70.00th=[ 167], 80.00th=[ 194], 90.00th=[ 235], 95.00th=[ 241], 00:15:18.972 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 338], 99.95th=[ 486], 00:15:18.972 | 99.99th=[ 519] 00:15:18.972 bw ( KiB/s): min= 9336, max= 9336, per=39.17%, avg=9336.00, stdev= 0.00, samples=1 00:15:18.972 iops : min= 2334, max= 2334, avg=2334.00, stdev= 0.00, samples=1 00:15:18.972 lat (usec) : 250=84.58%, 500=15.40%, 750=0.02% 00:15:18.972 cpu : usr=3.60%, sys=7.70%, ctx=4657, majf=0, minf=1 00:15:18.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:18.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.972 issued rwts: total=2097,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:18.972 00:15:18.972 Run status group 0 (all jobs): 00:15:18.972 READ: bw=15.9MiB/s (16.7MB/s), 85.4KiB/s-8380KiB/s (87.4kB/s-8581kB/s), io=16.4MiB (17.2MB), run=1001-1031msec 00:15:18.972 WRITE: bw=23.3MiB/s (24.4MB/s), 1986KiB/s-9.99MiB/s (2034kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1031msec 00:15:18.972 00:15:18.972 Disk stats (read/write): 00:15:18.972 nvme0n1: ios=68/512, merge=0/0, ticks=1213/93, in_queue=1306, util=85.67% 00:15:18.972 nvme0n2: ios=1931/2048, merge=0/0, ticks=508/309, in_queue=817, util=90.86% 00:15:18.972 nvme0n3: ios=71/512, merge=0/0, ticks=1011/92, in_queue=1103, util=94.38% 00:15:18.972 nvme0n4: ios=1925/2048, merge=0/0, ticks=516/315, in_queue=831, util=95.69% 00:15:18.972 23:56:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:18.972 [global] 00:15:18.972 thread=1 00:15:18.972 invalidate=1 00:15:18.972 rw=write 00:15:18.972 time_based=1 00:15:18.972 runtime=1 00:15:18.972 ioengine=libaio 00:15:18.972 direct=1 00:15:18.972 bs=4096 00:15:18.972 iodepth=128 00:15:18.972 norandommap=0 00:15:18.972 numjobs=1 00:15:18.972 00:15:18.972 verify_dump=1 00:15:18.972 verify_backlog=512 00:15:18.972 verify_state_save=0 00:15:18.972 do_verify=1 00:15:18.972 verify=crc32c-intel 00:15:18.972 [job0] 00:15:18.972 filename=/dev/nvme0n1 00:15:18.972 [job1] 00:15:18.972 filename=/dev/nvme0n2 00:15:18.972 [job2] 00:15:18.972 filename=/dev/nvme0n3 00:15:18.972 [job3] 00:15:18.972 filename=/dev/nvme0n4 00:15:18.972 Could not set queue depth (nvme0n1) 00:15:18.972 Could not set queue depth (nvme0n2) 00:15:18.972 Could not set queue depth (nvme0n3) 00:15:18.972 Could not set queue depth (nvme0n4) 00:15:19.230 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:19.230 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:19.230 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:19.230 job3: 
(g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:19.230 fio-3.35 00:15:19.230 Starting 4 threads 00:15:20.606 00:15:20.606 job0: (groupid=0, jobs=1): err= 0: pid=258915: Mon Dec 9 23:56:55 2024 00:15:20.606 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:15:20.606 slat (nsec): min=1213, max=17317k, avg=128551.34, stdev=932990.26 00:15:20.606 clat (usec): min=4506, max=60892, avg=16527.35, stdev=9521.06 00:15:20.606 lat (usec): min=4512, max=60918, avg=16655.90, stdev=9614.04 00:15:20.606 clat percentiles (usec): 00:15:20.606 | 1.00th=[ 6849], 5.00th=[ 8356], 10.00th=[ 9896], 20.00th=[11207], 00:15:20.606 | 30.00th=[11863], 40.00th=[12780], 50.00th=[13829], 60.00th=[14615], 00:15:20.606 | 70.00th=[15795], 80.00th=[17957], 90.00th=[29230], 95.00th=[43779], 00:15:20.606 | 99.00th=[48497], 99.50th=[51643], 99.90th=[55837], 99.95th=[60031], 00:15:20.606 | 99.99th=[61080] 00:15:20.606 write: IOPS=3991, BW=15.6MiB/s (16.3MB/s)(15.7MiB/1008msec); 0 zone resets 00:15:20.606 slat (nsec): min=1953, max=13232k, avg=127475.61, stdev=726673.90 00:15:20.606 clat (usec): min=1191, max=46969, avg=16975.13, stdev=8680.65 00:15:20.606 lat (usec): min=1202, max=46980, avg=17102.60, stdev=8754.84 00:15:20.606 clat percentiles (usec): 00:15:20.606 | 1.00th=[ 5538], 5.00th=[ 7308], 10.00th=[ 9110], 20.00th=[ 9896], 00:15:20.606 | 30.00th=[10552], 40.00th=[12518], 50.00th=[13698], 60.00th=[16319], 00:15:20.606 | 70.00th=[18482], 80.00th=[23200], 90.00th=[31327], 95.00th=[35390], 00:15:20.606 | 99.00th=[39584], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:15:20.606 | 99.99th=[46924] 00:15:20.606 bw ( KiB/s): min=10680, max=20480, per=23.31%, avg=15580.00, stdev=6929.65, samples=2 00:15:20.606 iops : min= 2670, max= 5120, avg=3895.00, stdev=1732.41, samples=2 00:15:20.606 lat (msec) : 2=0.04%, 4=0.18%, 10=16.72%, 20=61.09%, 50=21.61% 00:15:20.606 lat (msec) : 100=0.35% 00:15:20.606 cpu : usr=2.98%, sys=5.46%, ctx=377, majf=0, minf=1 00:15:20.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:20.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:20.606 issued rwts: total=3584,4023,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:20.606 job1: (groupid=0, jobs=1): err= 0: pid=258932: Mon Dec 9 23:56:55 2024 00:15:20.606 read: IOPS=4715, BW=18.4MiB/s (19.3MB/s)(18.4MiB/1001msec) 00:15:20.606 slat (nsec): min=1093, max=11741k, avg=95952.37, stdev=614510.97 00:15:20.606 clat (usec): min=394, max=44690, avg=12932.20, stdev=6205.78 00:15:20.606 lat (usec): min=1030, max=56386, avg=13028.15, stdev=6244.47 00:15:20.606 clat percentiles (usec): 00:15:20.606 | 1.00th=[ 5276], 5.00th=[ 7504], 10.00th=[ 8291], 20.00th=[ 9372], 00:15:20.606 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10945], 60.00th=[11469], 00:15:20.606 | 70.00th=[13173], 80.00th=[15139], 90.00th=[19006], 95.00th=[22676], 00:15:20.606 | 99.00th=[43779], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:15:20.606 | 99.99th=[44827] 00:15:20.606 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:15:20.606 slat (nsec): min=1865, max=17832k, avg=100869.80, stdev=684349.13 00:15:20.606 clat (usec): min=307, max=50489, avg=12804.21, stdev=7135.66 00:15:20.606 lat (usec): min=749, max=50517, avg=12905.08, stdev=7190.56 00:15:20.606 clat percentiles 
(usec): 00:15:20.606 | 1.00th=[ 4047], 5.00th=[ 7046], 10.00th=[ 8160], 20.00th=[ 9110], 00:15:20.606 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:15:20.606 | 70.00th=[11731], 80.00th=[14091], 90.00th=[23725], 95.00th=[30802], 00:15:20.606 | 99.00th=[38011], 99.50th=[43779], 99.90th=[45876], 99.95th=[45876], 00:15:20.606 | 99.99th=[50594] 00:15:20.606 bw ( KiB/s): min=21936, max=21936, per=32.82%, avg=21936.00, stdev= 0.00, samples=1 00:15:20.606 iops : min= 5484, max= 5484, avg=5484.00, stdev= 0.00, samples=1 00:15:20.606 lat (usec) : 500=0.02%, 1000=0.02% 00:15:20.606 lat (msec) : 2=0.24%, 4=0.64%, 10=30.47%, 20=58.23%, 50=10.37% 00:15:20.606 lat (msec) : 100=0.01% 00:15:20.606 cpu : usr=3.90%, sys=4.70%, ctx=518, majf=0, minf=1 00:15:20.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:20.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:20.606 issued rwts: total=4720,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:20.606 job2: (groupid=0, jobs=1): err= 0: pid=258953: Mon Dec 9 23:56:55 2024 00:15:20.606 read: IOPS=3547, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:15:20.606 slat (nsec): min=1442, max=10745k, avg=102808.18, stdev=773773.43 00:15:20.606 clat (usec): min=4567, max=43162, avg=14256.86, stdev=4459.66 00:15:20.606 lat (usec): min=4576, max=43185, avg=14359.66, stdev=4525.04 00:15:20.606 clat percentiles (usec): 00:15:20.606 | 1.00th=[ 7832], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[11076], 00:15:20.606 | 30.00th=[11600], 40.00th=[12256], 50.00th=[13435], 60.00th=[14222], 00:15:20.606 | 70.00th=[15664], 80.00th=[16319], 90.00th=[19268], 95.00th=[22938], 00:15:20.606 | 99.00th=[30278], 99.50th=[35390], 99.90th=[43254], 99.95th=[43254], 00:15:20.606 | 99.99th=[43254] 00:15:20.606 write: IOPS=4047, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1012msec); 0 zone resets 00:15:20.606 slat (usec): min=2, max=12177, avg=109.91, stdev=694.02 00:15:20.606 clat (msec): min=2, max=113, avg=18.77, stdev=17.28 00:15:20.606 lat (msec): min=2, max=113, avg=18.88, stdev=17.35 00:15:20.606 clat percentiles (msec): 00:15:20.606 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 11], 00:15:20.606 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:15:20.606 | 70.00th=[ 15], 80.00th=[ 28], 90.00th=[ 39], 95.00th=[ 55], 00:15:20.606 | 99.00th=[ 96], 99.50th=[ 107], 99.90th=[ 114], 99.95th=[ 114], 00:15:20.606 | 99.99th=[ 114] 00:15:20.606 bw ( KiB/s): min=15104, max=16688, per=23.78%, avg=15896.00, stdev=1120.06, samples=2 00:15:20.606 iops : min= 3776, max= 4172, avg=3974.00, stdev=280.01, samples=2 00:15:20.606 lat (msec) : 4=0.65%, 10=13.79%, 20=68.29%, 50=14.14%, 100=2.65% 00:15:20.606 lat (msec) : 250=0.47% 00:15:20.606 cpu : usr=3.56%, sys=5.74%, ctx=398, majf=0, minf=1 00:15:20.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:20.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:20.606 issued rwts: total=3590,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:20.606 job3: (groupid=0, jobs=1): err= 0: pid=258960: Mon Dec 9 23:56:55 2024 00:15:20.606 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:15:20.606 slat 
(nsec): min=1509, max=36759k, avg=140900.36, stdev=1182502.32 00:15:20.606 clat (usec): min=4134, max=74289, avg=17478.04, stdev=11659.25 00:15:20.606 lat (usec): min=4145, max=74314, avg=17618.94, stdev=11777.86 00:15:20.606 clat percentiles (usec): 00:15:20.606 | 1.00th=[ 4883], 5.00th=[ 7767], 10.00th=[ 8586], 20.00th=[ 9372], 00:15:20.606 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[11600], 60.00th=[15008], 00:15:20.606 | 70.00th=[17171], 80.00th=[29230], 90.00th=[33817], 95.00th=[38536], 00:15:20.606 | 99.00th=[52691], 99.50th=[56361], 99.90th=[56361], 99.95th=[64226], 00:15:20.606 | 99.99th=[73925] 00:15:20.606 write: IOPS=3655, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1004msec); 0 zone resets 00:15:20.606 slat (usec): min=2, max=17237, avg=126.45, stdev=926.11 00:15:20.606 clat (usec): min=1804, max=63332, avg=16906.60, stdev=12442.84 00:15:20.606 lat (usec): min=3209, max=66067, avg=17033.05, stdev=12533.86 00:15:20.606 clat percentiles (usec): 00:15:20.606 | 1.00th=[ 4817], 5.00th=[ 6456], 10.00th=[ 7898], 20.00th=[ 8455], 00:15:20.606 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[10290], 60.00th=[11600], 00:15:20.606 | 70.00th=[20579], 80.00th=[27132], 90.00th=[36439], 95.00th=[39584], 00:15:20.606 | 99.00th=[63177], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:15:20.606 | 99.99th=[63177] 00:15:20.606 bw ( KiB/s): min= 8656, max=20016, per=21.45%, avg=14336.00, stdev=8032.73, samples=2 00:15:20.606 iops : min= 2164, max= 5004, avg=3584.00, stdev=2008.18, samples=2 00:15:20.606 lat (msec) : 2=0.01%, 4=0.30%, 10=41.47%, 20=29.27%, 50=26.29% 00:15:20.606 lat (msec) : 100=2.66% 00:15:20.606 cpu : usr=3.99%, sys=4.59%, ctx=239, majf=0, minf=1 00:15:20.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:15:20.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:20.606 issued rwts: total=3584,3670,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:20.606 00:15:20.606 Run status group 0 (all jobs): 00:15:20.606 READ: bw=59.7MiB/s (62.6MB/s), 13.9MiB/s-18.4MiB/s (14.5MB/s-19.3MB/s), io=60.5MiB (63.4MB), run=1001-1012msec 00:15:20.606 WRITE: bw=65.3MiB/s (68.4MB/s), 14.3MiB/s-20.0MiB/s (15.0MB/s-20.9MB/s), io=66.1MiB (69.3MB), run=1001-1012msec 00:15:20.606 00:15:20.606 Disk stats (read/write): 00:15:20.606 nvme0n1: ios=3385/3584, merge=0/0, ticks=31444/33163, in_queue=64607, util=97.49% 00:15:20.606 nvme0n2: ios=4136/4226, merge=0/0, ticks=20299/22759, in_queue=43058, util=96.95% 00:15:20.606 nvme0n3: ios=3216/3584, merge=0/0, ticks=44034/60091, in_queue=104125, util=97.39% 00:15:20.606 nvme0n4: ios=2603/3031, merge=0/0, ticks=26982/30323, in_queue=57305, util=97.37% 00:15:20.606 23:56:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:20.606 [global] 00:15:20.606 thread=1 00:15:20.606 invalidate=1 00:15:20.606 rw=randwrite 00:15:20.606 time_based=1 00:15:20.606 runtime=1 00:15:20.606 ioengine=libaio 00:15:20.606 direct=1 00:15:20.606 bs=4096 00:15:20.606 iodepth=128 00:15:20.606 norandommap=0 00:15:20.606 numjobs=1 00:15:20.606 00:15:20.606 verify_dump=1 00:15:20.606 verify_backlog=512 00:15:20.606 verify_state_save=0 00:15:20.606 do_verify=1 00:15:20.606 verify=crc32c-intel 00:15:20.607 [job0] 00:15:20.607 filename=/dev/nvme0n1 00:15:20.607 [job1] 
00:15:20.607 filename=/dev/nvme0n2 00:15:20.607 [job2] 00:15:20.607 filename=/dev/nvme0n3 00:15:20.607 [job3] 00:15:20.607 filename=/dev/nvme0n4 00:15:20.607 Could not set queue depth (nvme0n1) 00:15:20.607 Could not set queue depth (nvme0n2) 00:15:20.607 Could not set queue depth (nvme0n3) 00:15:20.607 Could not set queue depth (nvme0n4) 00:15:20.864 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:20.864 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:20.864 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:20.864 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:20.864 fio-3.35 00:15:20.864 Starting 4 threads 00:15:22.239 00:15:22.239 job0: (groupid=0, jobs=1): err= 0: pid=259391: Mon Dec 9 23:56:56 2024 00:15:22.239 read: IOPS=7619, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1008msec) 00:15:22.239 slat (nsec): min=1217, max=7810.8k, avg=70045.26, stdev=489269.57 00:15:22.239 clat (usec): min=2855, max=15779, avg=8660.37, stdev=2082.64 00:15:22.239 lat (usec): min=2859, max=15795, avg=8730.42, stdev=2112.72 00:15:22.239 clat percentiles (usec): 00:15:22.239 | 1.00th=[ 3687], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7308], 00:15:22.239 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8356], 00:15:22.239 | 70.00th=[ 8717], 80.00th=[10159], 90.00th=[11994], 95.00th=[13304], 00:15:22.239 | 99.00th=[14746], 99.50th=[14877], 99.90th=[15401], 99.95th=[15401], 00:15:22.239 | 99.99th=[15795] 00:15:22.239 write: IOPS=7982, BW=31.2MiB/s (32.7MB/s)(31.4MiB/1008msec); 0 zone resets 00:15:22.239 slat (nsec): min=1981, max=5890.2k, avg=51869.26, stdev=214307.56 00:15:22.239 clat (usec): min=1828, max=15676, avg=7613.96, stdev=1763.96 00:15:22.239 lat (usec): min=1835, max=15679, avg=7665.83, stdev=1779.84 00:15:22.239 clat percentiles (usec): 00:15:22.239 | 1.00th=[ 2671], 5.00th=[ 3884], 10.00th=[ 4948], 20.00th=[ 6915], 00:15:22.239 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8094], 00:15:22.239 | 70.00th=[ 8225], 80.00th=[ 8291], 90.00th=[ 8455], 95.00th=[10028], 00:15:22.239 | 99.00th=[13042], 99.50th=[14484], 99.90th=[15139], 99.95th=[15664], 00:15:22.239 | 99.99th=[15664] 00:15:22.239 bw ( KiB/s): min=30584, max=32768, per=51.35%, avg=31676.00, stdev=1544.32, samples=2 00:15:22.239 iops : min= 7646, max= 8192, avg=7919.00, stdev=386.08, samples=2 00:15:22.239 lat (msec) : 2=0.08%, 4=3.39%, 10=83.72%, 20=12.81% 00:15:22.239 cpu : usr=5.06%, sys=8.04%, ctx=977, majf=0, minf=1 00:15:22.239 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:22.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:22.239 issued rwts: total=7680,8046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.239 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:22.239 job1: (groupid=0, jobs=1): err= 0: pid=259392: Mon Dec 9 23:56:56 2024 00:15:22.239 read: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(10.0MiB/1016msec) 00:15:22.239 slat (nsec): min=1098, max=17828k, avg=152399.90, stdev=1208840.81 00:15:22.239 clat (usec): min=4132, max=66165, avg=18513.98, stdev=9332.63 00:15:22.239 lat (usec): min=4138, max=66171, avg=18666.38, stdev=9448.05 00:15:22.239 clat percentiles (usec): 00:15:22.239 | 
1.00th=[ 4424], 5.00th=[ 7111], 10.00th=[ 8979], 20.00th=[11863], 00:15:22.240 | 30.00th=[12649], 40.00th=[15270], 50.00th=[17957], 60.00th=[18482], 00:15:22.240 | 70.00th=[19268], 80.00th=[26084], 90.00th=[26870], 95.00th=[32900], 00:15:22.240 | 99.00th=[61604], 99.50th=[64750], 99.90th=[66323], 99.95th=[66323], 00:15:22.240 | 99.99th=[66323] 00:15:22.240 write: IOPS=2942, BW=11.5MiB/s (12.1MB/s)(11.7MiB/1016msec); 0 zone resets 00:15:22.240 slat (usec): min=2, max=27505, avg=191.83, stdev=1224.19 00:15:22.240 clat (usec): min=530, max=91802, avg=27117.76, stdev=19711.68 00:15:22.240 lat (usec): min=555, max=91811, avg=27309.59, stdev=19820.31 00:15:22.240 clat percentiles (usec): 00:15:22.240 | 1.00th=[ 4293], 5.00th=[ 6980], 10.00th=[ 7373], 20.00th=[ 9372], 00:15:22.240 | 30.00th=[15533], 40.00th=[23462], 50.00th=[25035], 60.00th=[26346], 00:15:22.240 | 70.00th=[26608], 80.00th=[33817], 90.00th=[58459], 95.00th=[73925], 00:15:22.240 | 99.00th=[89654], 99.50th=[90702], 99.90th=[91751], 99.95th=[91751], 00:15:22.240 | 99.99th=[91751] 00:15:22.240 bw ( KiB/s): min=10616, max=12288, per=18.57%, avg=11452.00, stdev=1182.28, samples=2 00:15:22.240 iops : min= 2654, max= 3072, avg=2863.00, stdev=295.57, samples=2 00:15:22.240 lat (usec) : 750=0.04% 00:15:22.240 lat (msec) : 4=0.40%, 10=16.63%, 20=38.61%, 50=36.18%, 100=8.14% 00:15:22.240 cpu : usr=1.87%, sys=2.86%, ctx=243, majf=0, minf=1 00:15:22.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:22.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:22.240 issued rwts: total=2560,2990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.240 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:22.240 job2: (groupid=0, jobs=1): err= 0: pid=259393: Mon Dec 9 23:56:56 2024 00:15:22.240 read: IOPS=1950, BW=7801KiB/s (7988kB/s)(8160KiB/1046msec) 00:15:22.240 slat (nsec): min=1368, max=17394k, avg=163229.64, stdev=1187613.84 00:15:22.240 clat (usec): min=7005, max=54143, avg=21855.59, stdev=10938.02 00:15:22.240 lat (usec): min=7022, max=57956, avg=22018.82, stdev=10996.45 00:15:22.240 clat percentiles (usec): 00:15:22.240 | 1.00th=[10945], 5.00th=[11076], 10.00th=[11207], 20.00th=[11207], 00:15:22.240 | 30.00th=[11338], 40.00th=[18482], 50.00th=[19792], 60.00th=[23725], 00:15:22.240 | 70.00th=[26084], 80.00th=[27395], 90.00th=[33424], 95.00th=[53216], 00:15:22.240 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 00:15:22.240 | 99.99th=[54264] 00:15:22.240 write: IOPS=1957, BW=7832KiB/s (8020kB/s)(8192KiB/1046msec); 0 zone resets 00:15:22.240 slat (usec): min=2, max=29033, avg=318.58, stdev=1706.48 00:15:22.240 clat (msec): min=3, max=130, avg=42.79, stdev=27.87 00:15:22.240 lat (msec): min=3, max=130, avg=43.11, stdev=28.03 00:15:22.240 clat percentiles (msec): 00:15:22.240 | 1.00th=[ 9], 5.00th=[ 25], 10.00th=[ 25], 20.00th=[ 26], 00:15:22.240 | 30.00th=[ 27], 40.00th=[ 27], 50.00th=[ 27], 60.00th=[ 32], 00:15:22.240 | 70.00th=[ 42], 80.00th=[ 69], 90.00th=[ 91], 95.00th=[ 107], 00:15:22.240 | 99.00th=[ 127], 99.50th=[ 127], 99.90th=[ 131], 99.95th=[ 131], 00:15:22.240 | 99.99th=[ 131] 00:15:22.240 bw ( KiB/s): min= 7952, max= 8432, per=13.28%, avg=8192.00, stdev=339.41, samples=2 00:15:22.240 iops : min= 1988, max= 2108, avg=2048.00, stdev=84.85, samples=2 00:15:22.240 lat (msec) : 4=0.15%, 10=0.88%, 20=27.20%, 50=55.26%, 100=13.82% 00:15:22.240 lat (msec) : 250=2.69% 
00:15:22.240 cpu : usr=1.24%, sys=3.35%, ctx=232, majf=0, minf=1 00:15:22.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:15:22.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:22.240 issued rwts: total=2040,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.240 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:22.240 job3: (groupid=0, jobs=1): err= 0: pid=259394: Mon Dec 9 23:56:56 2024 00:15:22.240 read: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec) 00:15:22.240 slat (nsec): min=1977, max=35659k, avg=197478.71, stdev=1564073.82 00:15:22.240 clat (msec): min=6, max=119, avg=22.06, stdev=16.20 00:15:22.240 lat (msec): min=6, max=119, avg=22.26, stdev=16.37 00:15:22.240 clat percentiles (msec): 00:15:22.240 | 1.00th=[ 8], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 12], 00:15:22.240 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 19], 60.00th=[ 21], 00:15:22.240 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 42], 95.00th=[ 49], 00:15:22.240 | 99.00th=[ 103], 99.50th=[ 118], 99.90th=[ 121], 99.95th=[ 121], 00:15:22.240 | 99.99th=[ 121] 00:15:22.240 write: IOPS=3000, BW=11.7MiB/s (12.3MB/s)(11.9MiB/1015msec); 0 zone resets 00:15:22.240 slat (usec): min=2, max=18878, avg=155.75, stdev=929.74 00:15:22.240 clat (usec): min=1581, max=119559, avg=23743.89, stdev=17355.94 00:15:22.240 lat (usec): min=1594, max=119562, avg=23899.63, stdev=17448.42 00:15:22.240 clat percentiles (msec): 00:15:22.240 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:15:22.240 | 30.00th=[ 11], 40.00th=[ 14], 50.00th=[ 25], 60.00th=[ 26], 00:15:22.240 | 70.00th=[ 27], 80.00th=[ 27], 90.00th=[ 50], 95.00th=[ 64], 00:15:22.240 | 99.00th=[ 80], 99.50th=[ 93], 99.90th=[ 101], 99.95th=[ 121], 00:15:22.240 | 99.99th=[ 121] 00:15:22.240 bw ( KiB/s): min=11064, max=12288, per=18.93%, avg=11676.00, stdev=865.50, samples=2 00:15:22.240 iops : min= 2766, max= 3072, avg=2919.00, stdev=216.37, samples=2 00:15:22.240 lat (msec) : 2=0.04%, 4=0.21%, 10=17.04%, 20=34.12%, 50=41.35% 00:15:22.240 lat (msec) : 100=6.42%, 250=0.82% 00:15:22.240 cpu : usr=2.46%, sys=4.14%, ctx=239, majf=0, minf=2 00:15:22.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:22.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:22.240 issued rwts: total=2560,3046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.240 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:22.240 00:15:22.240 Run status group 0 (all jobs): 00:15:22.240 READ: bw=55.4MiB/s (58.1MB/s), 7801KiB/s-29.8MiB/s (7988kB/s-31.2MB/s), io=58.0MiB (60.8MB), run=1008-1046msec 00:15:22.240 WRITE: bw=60.2MiB/s (63.2MB/s), 7832KiB/s-31.2MiB/s (8020kB/s-32.7MB/s), io=63.0MiB (66.1MB), run=1008-1046msec 00:15:22.240 00:15:22.240 Disk stats (read/write): 00:15:22.240 nvme0n1: ios=6697/6663, merge=0/0, ticks=55060/49353, in_queue=104413, util=87.07% 00:15:22.240 nvme0n2: ios=2097/2558, merge=0/0, ticks=38867/65588, in_queue=104455, util=89.64% 00:15:22.240 nvme0n3: ios=1599/1695, merge=0/0, ticks=31488/67902, in_queue=99390, util=94.48% 00:15:22.240 nvme0n4: ios=2356/2560, merge=0/0, ticks=50027/53714, in_queue=103741, util=95.60% 00:15:22.240 23:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:22.240 23:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@59 -- # fio_pid=259524 00:15:22.240 23:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:22.240 23:56:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:22.240 [global] 00:15:22.240 thread=1 00:15:22.240 invalidate=1 00:15:22.240 rw=read 00:15:22.240 time_based=1 00:15:22.240 runtime=10 00:15:22.240 ioengine=libaio 00:15:22.240 direct=1 00:15:22.240 bs=4096 00:15:22.240 iodepth=1 00:15:22.240 norandommap=1 00:15:22.240 numjobs=1 00:15:22.240 00:15:22.240 [job0] 00:15:22.240 filename=/dev/nvme0n1 00:15:22.240 [job1] 00:15:22.240 filename=/dev/nvme0n2 00:15:22.240 [job2] 00:15:22.240 filename=/dev/nvme0n3 00:15:22.240 [job3] 00:15:22.240 filename=/dev/nvme0n4 00:15:22.240 Could not set queue depth (nvme0n1) 00:15:22.240 Could not set queue depth (nvme0n2) 00:15:22.240 Could not set queue depth (nvme0n3) 00:15:22.240 Could not set queue depth (nvme0n4) 00:15:22.240 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:22.240 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:22.240 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:22.240 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:22.240 fio-3.35 00:15:22.240 Starting 4 threads 00:15:25.525 23:56:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:25.525 23:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:25.525 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2195456, buflen=4096 00:15:25.525 fio: pid=259765, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:25.525 23:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:25.525 23:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:25.525 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=352256, buflen=4096 00:15:25.525 fio: pid=259764, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:25.525 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=51372032, buflen=4096 00:15:25.525 fio: pid=259762, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:25.525 23:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:25.525 23:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:25.785 23:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:25.785 23:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:25.785 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=11616256, buflen=4096 00:15:25.785 fio: pid=259763, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:26.044 00:15:26.044 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=259762: Mon Dec 9 23:57:00 2024 00:15:26.044 read: IOPS=3995, BW=15.6MiB/s (16.4MB/s)(49.0MiB/3139msec) 00:15:26.044 slat (usec): min=7, max=28133, avg=11.88, stdev=279.04 00:15:26.044 clat (usec): min=156, max=41197, avg=234.70, stdev=888.99 00:15:26.044 lat (usec): min=164, max=41206, avg=246.58, stdev=932.03 00:15:26.044 clat percentiles (usec): 00:15:26.044 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 194], 00:15:26.044 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:15:26.044 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 260], 00:15:26.044 | 99.00th=[ 310], 99.50th=[ 375], 99.90th=[ 482], 99.95th=[ 1876], 00:15:26.044 | 99.99th=[41157] 00:15:26.044 bw ( KiB/s): min=12888, max=19104, per=85.68%, avg=16157.50, stdev=2233.63, samples=6 00:15:26.044 iops : min= 3222, max= 4776, avg=4039.33, stdev=558.44, samples=6 00:15:26.044 lat (usec) : 250=91.29%, 500=8.61% 00:15:26.044 lat (msec) : 2=0.05%, 50=0.05% 00:15:26.044 cpu : usr=2.17%, sys=6.60%, ctx=12547, majf=0, minf=1 00:15:26.044 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:26.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.045 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.045 issued rwts: total=12543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:26.045 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=259763: Mon Dec 9 23:57:00 2024 00:15:26.045 read: IOPS=835, BW=3342KiB/s (3423kB/s)(11.1MiB/3394msec) 00:15:26.045 slat (usec): min=7, max=16710, avg=19.01, stdev=395.68 00:15:26.045 clat (usec): min=164, max=43008, avg=1167.96, stdev=6117.15 00:15:26.045 lat (usec): min=172, max=57975, avg=1186.97, stdev=6199.88 00:15:26.045 clat percentiles (usec): 00:15:26.045 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 215], 00:15:26.045 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 241], 00:15:26.045 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:15:26.045 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:15:26.045 | 99.99th=[43254] 00:15:26.045 bw ( KiB/s): min= 93, max=16792, per=19.98%, avg=3768.83, stdev=6728.69, samples=6 00:15:26.045 iops : min= 23, max= 4198, avg=942.17, stdev=1682.20, samples=6 00:15:26.045 lat (usec) : 250=80.90%, 500=16.78% 00:15:26.045 lat (msec) : 50=2.29% 00:15:26.045 cpu : usr=0.53%, sys=1.33%, ctx=2840, majf=0, minf=2 00:15:26.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:26.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.045 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.045 issued rwts: total=2837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:26.045 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=259764: Mon 
Dec 9 23:57:00 2024 00:15:26.045 read: IOPS=29, BW=117KiB/s (120kB/s)(344KiB/2947msec) 00:15:26.045 slat (nsec): min=7308, max=80208, avg=21541.77, stdev=8488.06 00:15:26.045 clat (usec): min=208, max=42946, avg=33995.95, stdev=15603.45 00:15:26.045 lat (usec): min=218, max=42969, avg=34017.47, stdev=15604.08 00:15:26.045 clat percentiles (usec): 00:15:26.045 | 1.00th=[ 208], 5.00th=[ 225], 10.00th=[ 249], 20.00th=[40633], 00:15:26.045 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:26.045 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:15:26.045 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:15:26.045 | 99.99th=[42730] 00:15:26.045 bw ( KiB/s): min= 96, max= 128, per=0.55%, avg=104.00, stdev=13.86, samples=5 00:15:26.045 iops : min= 24, max= 32, avg=26.00, stdev= 3.46, samples=5 00:15:26.045 lat (usec) : 250=10.34%, 500=6.90% 00:15:26.045 lat (msec) : 50=81.61% 00:15:26.045 cpu : usr=0.10%, sys=0.00%, ctx=89, majf=0, minf=2 00:15:26.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:26.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.045 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.045 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:26.045 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=259765: Mon Dec 9 23:57:00 2024 00:15:26.045 read: IOPS=196, BW=786KiB/s (805kB/s)(2144KiB/2728msec) 00:15:26.045 slat (nsec): min=3324, max=34497, avg=9302.73, stdev=5051.41 00:15:26.045 clat (usec): min=190, max=41987, avg=5037.94, stdev=13107.27 00:15:26.045 lat (usec): min=197, max=42008, avg=5047.22, stdev=13108.32 00:15:26.045 clat percentiles (usec): 00:15:26.045 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 227], 00:15:26.045 | 30.00th=[ 243], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:15:26.045 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[40633], 95.00th=[41157], 00:15:26.045 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:15:26.045 | 99.99th=[42206] 00:15:26.045 bw ( KiB/s): min= 216, max= 1992, per=3.19%, avg=601.60, stdev=777.92, samples=5 00:15:26.045 iops : min= 54, max= 498, avg=150.40, stdev=194.48, samples=5 00:15:26.045 lat (usec) : 250=33.33%, 500=54.00%, 750=0.74% 00:15:26.045 lat (msec) : 50=11.73% 00:15:26.045 cpu : usr=0.11%, sys=0.18%, ctx=537, majf=0, minf=1 00:15:26.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:26.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.045 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.045 issued rwts: total=537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:26.045 00:15:26.045 Run status group 0 (all jobs): 00:15:26.045 READ: bw=18.4MiB/s (19.3MB/s), 117KiB/s-15.6MiB/s (120kB/s-16.4MB/s), io=62.5MiB (65.5MB), run=2728-3394msec 00:15:26.045 00:15:26.045 Disk stats (read/write): 00:15:26.045 nvme0n1: ios=12514/0, merge=0/0, ticks=3605/0, in_queue=3605, util=97.97% 00:15:26.045 nvme0n2: ios=2873/0, merge=0/0, ticks=3425/0, in_queue=3425, util=98.71% 00:15:26.045 nvme0n3: ios=131/0, merge=0/0, ticks=3769/0, in_queue=3769, util=99.39% 00:15:26.045 nvme0n4: ios=524/0, merge=0/0, ticks=2574/0, in_queue=2574, util=96.48% 
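The "Operation not supported" errors in the read run above are expected: while fio is still reading from the exported namespaces, the test deletes the backing raid and malloc bdevs over RPC, so outstanding I/O fails by design and a non-zero fio exit status counts as a pass. A minimal sketch of that hotplug pattern, assuming the fio-wrapper and rpc.py paths used in this workspace (the bdev names and the sleep match the trace above, but the loop structure here is illustrative):

  #!/usr/bin/env bash
  # Hotplug sketch: run fio against the NVMe-oF namespaces, pull the backing
  # bdevs out from under it, and treat a failing fio exit status as success.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk

  "$SPDK/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10s read, 4k blocks, iodepth=1
  fio_pid=$!
  sleep 3                                                            # let I/O start flowing first

  "$SPDK/scripts/rpc.py" bdev_raid_delete concat0
  "$SPDK/scripts/rpc.py" bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      "$SPDK/scripts/rpc.py" bdev_malloc_delete "$m"
  done

  fio_status=0
  wait "$fio_pid" || fio_status=$?
  # A non-zero status is the expected outcome once the bdevs are gone.
  [ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'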
00:15:26.045 23:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:26.045 23:57:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:26.304 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:26.304 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:26.564 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:26.564 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:26.823 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:26.823 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:26.823 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:26.823 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 259524 00:15:26.823 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:26.823 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:27.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.083 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:27.083 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:15:27.083 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:27.083 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.083 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:27.083 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:27.083 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:15:27.083 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:27.083 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:27.083 nvmf hotplug test: fio failed as expected 00:15:27.083 23:57:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:27.342 23:57:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:27.342 rmmod nvme_tcp 00:15:27.342 rmmod nvme_fabrics 00:15:27.342 rmmod nvme_keyring 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 256698 ']' 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 256698 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 256698 ']' 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 256698 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 256698 00:15:27.342 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.343 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.343 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 256698' 00:15:27.343 killing process with pid 256698 00:15:27.343 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 256698 00:15:27.343 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 256698 00:15:27.603 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:27.603 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:27.603 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:27.603 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:15:27.603 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:15:27.603 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:27.603 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@791 -- # iptables-restore 00:15:27.603 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:27.603 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:27.603 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.603 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.603 23:57:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.510 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:29.510 00:15:29.511 real 0m26.924s 00:15:29.511 user 1m46.256s 00:15:29.511 sys 0m8.508s 00:15:29.511 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.511 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.511 ************************************ 00:15:29.511 END TEST nvmf_fio_target 00:15:29.511 ************************************ 00:15:29.770 23:57:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:29.770 23:57:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:29.770 23:57:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.770 23:57:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:29.770 ************************************ 00:15:29.770 START TEST nvmf_bdevio 00:15:29.770 ************************************ 00:15:29.770 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:29.770 * Looking for test storage... 
00:15:29.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:15:29.770 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:29.770 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:29.770 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:29.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.771 --rc genhtml_branch_coverage=1 00:15:29.771 --rc genhtml_function_coverage=1 00:15:29.771 --rc genhtml_legend=1 00:15:29.771 --rc geninfo_all_blocks=1 00:15:29.771 --rc geninfo_unexecuted_blocks=1 00:15:29.771 00:15:29.771 ' 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:29.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.771 --rc genhtml_branch_coverage=1 00:15:29.771 --rc genhtml_function_coverage=1 00:15:29.771 --rc genhtml_legend=1 00:15:29.771 --rc geninfo_all_blocks=1 00:15:29.771 --rc geninfo_unexecuted_blocks=1 00:15:29.771 00:15:29.771 ' 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:29.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.771 --rc genhtml_branch_coverage=1 00:15:29.771 --rc genhtml_function_coverage=1 00:15:29.771 --rc genhtml_legend=1 00:15:29.771 --rc geninfo_all_blocks=1 00:15:29.771 --rc geninfo_unexecuted_blocks=1 00:15:29.771 00:15:29.771 ' 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:29.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.771 --rc genhtml_branch_coverage=1 00:15:29.771 --rc genhtml_function_coverage=1 00:15:29.771 --rc genhtml_legend=1 00:15:29.771 --rc geninfo_all_blocks=1 00:15:29.771 --rc geninfo_unexecuted_blocks=1 00:15:29.771 00:15:29.771 ' 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.771 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:30.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:30.031 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.032 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:30.032 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:30.032 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:30.032 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.032 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.032 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.032 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:30.032 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:30.032 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:15:30.032 23:57:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:36.609 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:36.609 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:36.609 23:57:10 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:36.609 Found net devices under 0000:86:00.0: cvl_0_0 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.609 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:36.610 Found net devices under 0000:86:00.1: cvl_0_1 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.610 
23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:36.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:15:36.610 00:15:36.610 --- 10.0.0.2 ping statistics --- 00:15:36.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.610 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:15:36.610 00:15:36.610 --- 10.0.0.1 ping statistics --- 00:15:36.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.610 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=264544 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 264544 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 264544 ']' 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.610 [2024-12-09 23:57:10.762509] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:15:36.610 [2024-12-09 23:57:10.762568] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.610 [2024-12-09 23:57:10.845758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.610 [2024-12-09 23:57:10.887182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.610 [2024-12-09 23:57:10.887219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.610 [2024-12-09 23:57:10.887226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.610 [2024-12-09 23:57:10.887232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.610 [2024-12-09 23:57:10.887238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.610 [2024-12-09 23:57:10.888862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:36.610 [2024-12-09 23:57:10.888972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:36.610 [2024-12-09 23:57:10.889078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.610 [2024-12-09 23:57:10.889080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:36.610 23:57:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.610 [2024-12-09 23:57:11.026585] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.610 Malloc0 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.610 23:57:11 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:36.610 [2024-12-09 23:57:11.090972] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:15:36.610 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:15:36.611 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:36.611 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:36.611 { 00:15:36.611 "params": { 00:15:36.611 "name": "Nvme$subsystem", 00:15:36.611 "trtype": "$TEST_TRANSPORT", 00:15:36.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:36.611 "adrfam": "ipv4", 00:15:36.611 "trsvcid": "$NVMF_PORT", 00:15:36.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:36.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:36.611 "hdgst": ${hdgst:-false}, 00:15:36.611 "ddgst": ${ddgst:-false} 00:15:36.611 }, 00:15:36.611 "method": "bdev_nvme_attach_controller" 00:15:36.611 } 00:15:36.611 EOF 00:15:36.611 )") 00:15:36.611 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:15:36.611 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:15:36.611 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:15:36.611 23:57:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:36.611 "params": { 00:15:36.611 "name": "Nvme1", 00:15:36.611 "trtype": "tcp", 00:15:36.611 "traddr": "10.0.0.2", 00:15:36.611 "adrfam": "ipv4", 00:15:36.611 "trsvcid": "4420", 00:15:36.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.611 "hdgst": false, 00:15:36.611 "ddgst": false 00:15:36.611 }, 00:15:36.611 "method": "bdev_nvme_attach_controller" 00:15:36.611 }' 00:15:36.611 [2024-12-09 23:57:11.140043] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
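For reference, the target-side setup traced above comes down to five RPCs against the running nvmf target. A minimal sketch of the equivalent manual invocation, assuming rpc_cmd in the test harness resolves to the repository's scripts/rpc.py talking to the default /var/tmp/spdk.sock (flags copied verbatim from the trace):

  # TCP transport with the same options the test passes
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB memory-backed bdev with 512-byte blocks
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem cnode1 (allow any host, serial SPDK00000000000001), Malloc0 as a namespace, listener on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON rendered by gen_nvmf_target_json just above is the initiator-side counterpart: it has bdevio attach a bdev_nvme controller (Nvme1) to that same subsystem over TCP at 10.0.0.2:4420.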
00:15:36.611 [2024-12-09 23:57:11.140088] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid264771 ] 00:15:36.611 [2024-12-09 23:57:11.216212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:36.611 [2024-12-09 23:57:11.258948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.611 [2024-12-09 23:57:11.259056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.611 [2024-12-09 23:57:11.259056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.869 I/O targets: 00:15:36.869 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:36.869 00:15:36.869 00:15:36.869 CUnit - A unit testing framework for C - Version 2.1-3 00:15:36.869 http://cunit.sourceforge.net/ 00:15:36.869 00:15:36.869 00:15:36.869 Suite: bdevio tests on: Nvme1n1 00:15:36.869 Test: blockdev write read block ...passed 00:15:36.869 Test: blockdev write zeroes read block ...passed 00:15:36.869 Test: blockdev write zeroes read no split ...passed 00:15:36.869 Test: blockdev write zeroes read split ...passed 00:15:36.869 Test: blockdev write zeroes read split partial ...passed 00:15:36.869 Test: blockdev reset ...[2024-12-09 23:57:11.696004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:36.869 [2024-12-09 23:57:11.696066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad0f30 (9): Bad file descriptor 00:15:36.869 [2024-12-09 23:57:11.708978] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:36.869 passed 00:15:36.869 Test: blockdev write read 8 blocks ...passed 00:15:36.869 Test: blockdev write read size > 128k ...passed 00:15:36.869 Test: blockdev write read invalid size ...passed 00:15:36.869 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:36.869 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:36.869 Test: blockdev write read max offset ...passed 00:15:37.127 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:37.127 Test: blockdev writev readv 8 blocks ...passed 00:15:37.127 Test: blockdev writev readv 30 x 1block ...passed 00:15:37.127 Test: blockdev writev readv block ...passed 00:15:37.127 Test: blockdev writev readv size > 128k ...passed 00:15:37.127 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:37.127 Test: blockdev comparev and writev ...[2024-12-09 23:57:11.964252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.127 [2024-12-09 23:57:11.964280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:37.127 [2024-12-09 23:57:11.964295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.127 [2024-12-09 23:57:11.964303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:37.127 [2024-12-09 23:57:11.964563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.127 [2024-12-09 23:57:11.964573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:37.127 [2024-12-09 23:57:11.964585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.127 [2024-12-09 23:57:11.964596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:37.127 [2024-12-09 23:57:11.964836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.127 [2024-12-09 23:57:11.964846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:37.127 [2024-12-09 23:57:11.964857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.127 [2024-12-09 23:57:11.964864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:37.127 [2024-12-09 23:57:11.965113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.127 [2024-12-09 23:57:11.965128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:37.127 [2024-12-09 23:57:11.965139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.127 [2024-12-09 23:57:11.965146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:37.127 passed 00:15:37.127 Test: blockdev nvme passthru rw ...passed 00:15:37.127 Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:57:12.047542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:37.127 [2024-12-09 23:57:12.047558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:37.127 [2024-12-09 23:57:12.047668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:37.127 [2024-12-09 23:57:12.047678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:37.127 [2024-12-09 23:57:12.047781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:37.127 [2024-12-09 23:57:12.047790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:37.127 [2024-12-09 23:57:12.047890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:37.127 [2024-12-09 23:57:12.047899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:37.127 passed 00:15:37.386 Test: blockdev nvme admin passthru ...passed 00:15:37.386 Test: blockdev copy ...passed 00:15:37.386 00:15:37.386 Run Summary: Type Total Ran Passed Failed Inactive 00:15:37.386 suites 1 1 n/a 0 0 00:15:37.386 tests 23 23 23 0 0 00:15:37.386 asserts 152 152 152 0 n/a 00:15:37.386 00:15:37.386 Elapsed time = 1.060 seconds 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:37.386 rmmod nvme_tcp 00:15:37.386 rmmod nvme_fabrics 00:15:37.386 rmmod nvme_keyring 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 264544 ']' 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 264544 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 264544 ']' 00:15:37.386 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 264544 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 264544 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 264544' 00:15:37.646 killing process with pid 264544 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 264544 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 264544 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.646 23:57:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:40.185 00:15:40.185 real 0m10.113s 00:15:40.185 user 0m10.753s 00:15:40.185 sys 0m4.986s 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:40.185 ************************************ 00:15:40.185 END TEST nvmf_bdevio 00:15:40.185 ************************************ 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:40.185 00:15:40.185 real 4m35.940s 00:15:40.185 user 10m29.142s 00:15:40.185 sys 1m37.558s 00:15:40.185 
23:57:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:40.185 ************************************ 00:15:40.185 END TEST nvmf_target_core 00:15:40.185 ************************************ 00:15:40.185 23:57:14 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:40.185 23:57:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:40.185 23:57:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.185 23:57:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:40.185 ************************************ 00:15:40.185 START TEST nvmf_target_extra 00:15:40.185 ************************************ 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:40.185 * Looking for test storage... 00:15:40.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:40.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.185 --rc genhtml_branch_coverage=1 00:15:40.185 --rc genhtml_function_coverage=1 00:15:40.185 --rc genhtml_legend=1 00:15:40.185 --rc geninfo_all_blocks=1 00:15:40.185 --rc geninfo_unexecuted_blocks=1 00:15:40.185 00:15:40.185 ' 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:40.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.185 --rc genhtml_branch_coverage=1 00:15:40.185 --rc genhtml_function_coverage=1 00:15:40.185 --rc genhtml_legend=1 00:15:40.185 --rc geninfo_all_blocks=1 00:15:40.185 --rc geninfo_unexecuted_blocks=1 00:15:40.185 00:15:40.185 ' 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:40.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.185 --rc genhtml_branch_coverage=1 00:15:40.185 --rc genhtml_function_coverage=1 00:15:40.185 --rc genhtml_legend=1 00:15:40.185 --rc geninfo_all_blocks=1 00:15:40.185 --rc geninfo_unexecuted_blocks=1 00:15:40.185 00:15:40.185 ' 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:40.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.185 --rc genhtml_branch_coverage=1 00:15:40.185 --rc genhtml_function_coverage=1 00:15:40.185 --rc genhtml_legend=1 00:15:40.185 --rc geninfo_all_blocks=1 00:15:40.185 --rc geninfo_unexecuted_blocks=1 00:15:40.185 00:15:40.185 ' 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:40.185 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.186 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.186 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.186 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:40.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:40.186 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:40.186 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:40.186 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:40.186 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:40.186 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:15:40.186 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:15:40.186 23:57:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:40.186 23:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:40.186 23:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.186 23:57:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:40.186 ************************************ 00:15:40.186 START TEST nvmf_example 00:15:40.186 ************************************ 00:15:40.186 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:40.186 * Looking for test storage... 
00:15:40.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:15:40.186 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:40.186 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:15:40.186 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:40.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.446 --rc genhtml_branch_coverage=1 00:15:40.446 --rc genhtml_function_coverage=1 00:15:40.446 --rc genhtml_legend=1 00:15:40.446 --rc geninfo_all_blocks=1 00:15:40.446 --rc geninfo_unexecuted_blocks=1 00:15:40.446 00:15:40.446 ' 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:40.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.446 --rc genhtml_branch_coverage=1 00:15:40.446 --rc genhtml_function_coverage=1 00:15:40.446 --rc genhtml_legend=1 00:15:40.446 --rc geninfo_all_blocks=1 00:15:40.446 --rc geninfo_unexecuted_blocks=1 00:15:40.446 00:15:40.446 ' 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:40.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.446 --rc genhtml_branch_coverage=1 00:15:40.446 --rc genhtml_function_coverage=1 00:15:40.446 --rc genhtml_legend=1 00:15:40.446 --rc geninfo_all_blocks=1 00:15:40.446 --rc geninfo_unexecuted_blocks=1 00:15:40.446 00:15:40.446 ' 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:40.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.446 --rc genhtml_branch_coverage=1 00:15:40.446 --rc genhtml_function_coverage=1 00:15:40.446 --rc genhtml_legend=1 00:15:40.446 --rc geninfo_all_blocks=1 00:15:40.446 --rc geninfo_unexecuted_blocks=1 00:15:40.446 00:15:40.446 ' 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:15:40.446 23:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.446 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:40.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:40.447 
23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:15:40.447 23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:15:47.023 23:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:47.023 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:47.023 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:47.023 Found net devices under 0000:86:00.0: cvl_0_0 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:47.023 Found net devices under 0000:86:00.1: cvl_0_1 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.023 23:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:47.023 23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:47.023 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:47.023 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:47.023 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:47.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:47.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:15:47.024 00:15:47.024 --- 10.0.0.2 ping statistics --- 00:15:47.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.024 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:47.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:47.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:15:47.024 00:15:47.024 --- 10.0.0.1 ping statistics --- 00:15:47.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.024 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=268601 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 268601 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 268601 ']' 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.024 23:57:21 
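The nvmf example target above is launched inside the cvl_0_0_ns_spdk network namespace; the plumbing traced a little earlier amounts to moving one port of the e810 pair into that namespace as the target side and leaving the other in the root namespace as the initiator side. A condensed sketch of those steps, with the interface names (cvl_0_0, cvl_0_1) taken from the device discovery above and the SPDK_NVMF iptables comment omitted:

  # target-side port goes into its own namespace with 10.0.0.2/24
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # initiator-side port stays in the root namespace with 10.0.0.1/24
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  # let NVMe/TCP traffic to port 4420 through
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two single-packet pings just before the launch (0.326 ms to 10.0.0.2 from the root namespace, 0.210 ms to 10.0.0.1 from inside the namespace) are the harness confirming that path works before the example target is started.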
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.024 23:57:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:47.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:15:47.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:15:47.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:47.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:47.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:47.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.283 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:15:47.542 23:57:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:59.751 Initializing NVMe Controllers 00:15:59.751 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:59.751 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:59.751 Initialization complete. Launching workers. 00:15:59.751 ======================================================== 00:15:59.751 Latency(us) 00:15:59.751 Device Information : IOPS MiB/s Average min max 00:15:59.751 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18142.80 70.87 3527.10 527.44 15691.21 00:15:59.751 ======================================================== 00:15:59.751 Total : 18142.80 70.87 3527.10 527.44 15691.21 00:15:59.751 00:15:59.751 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:15:59.751 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:15:59.751 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:59.751 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:15:59.751 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:59.751 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:15:59.751 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:59.751 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:59.751 rmmod nvme_tcp 00:15:59.751 rmmod nvme_fabrics 00:15:59.751 rmmod nvme_keyring 00:15:59.751 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:59.751 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:15:59.751 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:15:59.751 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 268601 ']' 00:15:59.751 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 268601 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 268601 ']' 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 268601 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 268601 00:15:59.752 23:57:32 
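Everything the example target needs is configured over JSON-RPC before spdk_nvme_perf is pointed at it: a TCP transport, a malloc bdev, a subsystem, a namespace, and a TCP listener on 10.0.0.2:4420. A minimal sketch of the same bring-up using scripts/rpc.py from the SPDK repo root; the test harness's rpc_cmd wrapper issues these same RPCs against the nvmf example app started above, so using rpc.py directly and the default /var/tmp/spdk.sock socket are assumptions here, not part of the captured run:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512          # 64 MiB malloc bdev, 512-byte blocks (Malloc0 in the run above)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The perf line is the invocation from the run above (queue depth 64, 4 KiB I/Os, a mixed random read/write workload for 10 seconds), executed from the host side so the traffic crosses the cvl_0_1 to cvl_0_0 link into the target namespace rather than looping back locally.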
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 268601' 00:15:59.752 killing process with pid 268601 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 268601 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 268601 00:15:59.752 nvmf threads initialize successfully 00:15:59.752 bdev subsystem init successfully 00:15:59.752 created a nvmf target service 00:15:59.752 create targets's poll groups done 00:15:59.752 all subsystems of target started 00:15:59.752 nvmf target is running 00:15:59.752 all subsystems of target stopped 00:15:59.752 destroy targets's poll groups done 00:15:59.752 destroyed the nvmf target service 00:15:59.752 bdev subsystem finish successfully 00:15:59.752 nvmf threads destroy successfully 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.752 23:57:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.325 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:00.325 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:16:00.325 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:00.325 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:00.325 00:16:00.325 real 0m20.024s 00:16:00.325 user 0m46.577s 00:16:00.325 sys 0m5.980s 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:16:00.325 ************************************ 00:16:00.325 END TEST nvmf_example 00:16:00.325 ************************************ 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:00.325 ************************************ 00:16:00.325 START TEST nvmf_filesystem 00:16:00.325 ************************************ 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:16:00.325 * Looking for test storage... 00:16:00.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:16:00.325 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:00.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.326 --rc genhtml_branch_coverage=1 00:16:00.326 --rc genhtml_function_coverage=1 00:16:00.326 --rc genhtml_legend=1 00:16:00.326 --rc geninfo_all_blocks=1 00:16:00.326 --rc geninfo_unexecuted_blocks=1 00:16:00.326 00:16:00.326 ' 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:00.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.326 --rc genhtml_branch_coverage=1 00:16:00.326 --rc genhtml_function_coverage=1 00:16:00.326 --rc genhtml_legend=1 00:16:00.326 --rc geninfo_all_blocks=1 00:16:00.326 --rc geninfo_unexecuted_blocks=1 00:16:00.326 00:16:00.326 ' 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:00.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.326 --rc genhtml_branch_coverage=1 00:16:00.326 --rc genhtml_function_coverage=1 00:16:00.326 --rc genhtml_legend=1 00:16:00.326 --rc geninfo_all_blocks=1 00:16:00.326 --rc geninfo_unexecuted_blocks=1 00:16:00.326 00:16:00.326 ' 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:00.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.326 --rc genhtml_branch_coverage=1 00:16:00.326 --rc genhtml_function_coverage=1 00:16:00.326 --rc genhtml_legend=1 00:16:00.326 --rc geninfo_all_blocks=1 00:16:00.326 --rc geninfo_unexecuted_blocks=1 00:16:00.326 00:16:00.326 ' 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh 00:16:00.326 23:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output ']' 00:16:00.326 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/build_config.sh ]] 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/build_config.sh 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk 00:16:00.606 
23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:16:00.606 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/applications.sh 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/applications.sh 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/config.h ]] 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:00.607 #define SPDK_CONFIG_H 00:16:00.607 #define SPDK_CONFIG_AIO_FSDEV 1 00:16:00.607 #define SPDK_CONFIG_APPS 1 00:16:00.607 #define SPDK_CONFIG_ARCH native 00:16:00.607 #undef SPDK_CONFIG_ASAN 00:16:00.607 #undef SPDK_CONFIG_AVAHI 00:16:00.607 #undef SPDK_CONFIG_CET 00:16:00.607 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:16:00.607 #define SPDK_CONFIG_COVERAGE 1 00:16:00.607 #define SPDK_CONFIG_CROSS_PREFIX 00:16:00.607 #undef SPDK_CONFIG_CRYPTO 00:16:00.607 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:00.607 #undef SPDK_CONFIG_CUSTOMOCF 00:16:00.607 #undef SPDK_CONFIG_DAOS 00:16:00.607 #define SPDK_CONFIG_DAOS_DIR 00:16:00.607 #define SPDK_CONFIG_DEBUG 1 00:16:00.607 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:00.607 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build 00:16:00.607 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:00.607 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:00.607 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:00.607 #undef SPDK_CONFIG_DPDK_UADK 00:16:00.607 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk 00:16:00.607 #define SPDK_CONFIG_EXAMPLES 1 00:16:00.607 #undef SPDK_CONFIG_FC 00:16:00.607 #define SPDK_CONFIG_FC_PATH 00:16:00.607 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:00.607 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:00.607 #define SPDK_CONFIG_FSDEV 1 00:16:00.607 #undef SPDK_CONFIG_FUSE 00:16:00.607 #undef SPDK_CONFIG_FUZZER 00:16:00.607 #define SPDK_CONFIG_FUZZER_LIB 00:16:00.607 #undef SPDK_CONFIG_GOLANG 00:16:00.607 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:00.607 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:16:00.607 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:00.607 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:16:00.607 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:00.607 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:00.607 #undef SPDK_CONFIG_HAVE_LZ4 00:16:00.607 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:16:00.607 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:16:00.607 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:00.607 #define SPDK_CONFIG_IDXD 1 00:16:00.607 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:00.607 #undef SPDK_CONFIG_IPSEC_MB 00:16:00.607 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:00.607 #define SPDK_CONFIG_ISAL 1 00:16:00.607 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:00.607 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:00.607 #define SPDK_CONFIG_LIBDIR 00:16:00.607 #undef SPDK_CONFIG_LTO 00:16:00.607 #define SPDK_CONFIG_MAX_LCORES 128 00:16:00.607 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:16:00.607 #define SPDK_CONFIG_NVME_CUSE 1 00:16:00.607 #undef SPDK_CONFIG_OCF 00:16:00.607 #define SPDK_CONFIG_OCF_PATH 00:16:00.607 #define SPDK_CONFIG_OPENSSL_PATH 00:16:00.607 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:00.607 #define SPDK_CONFIG_PGO_DIR 00:16:00.607 #undef SPDK_CONFIG_PGO_USE 00:16:00.607 #define SPDK_CONFIG_PREFIX /usr/local 00:16:00.607 #undef SPDK_CONFIG_RAID5F 00:16:00.607 #undef SPDK_CONFIG_RBD 00:16:00.607 #define SPDK_CONFIG_RDMA 1 00:16:00.607 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:00.607 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:00.607 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:00.607 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:00.607 #define SPDK_CONFIG_SHARED 1 00:16:00.607 #undef SPDK_CONFIG_SMA 00:16:00.607 #define SPDK_CONFIG_TESTS 1 00:16:00.607 #undef SPDK_CONFIG_TSAN 
00:16:00.607 #define SPDK_CONFIG_UBLK 1 00:16:00.607 #define SPDK_CONFIG_UBSAN 1 00:16:00.607 #undef SPDK_CONFIG_UNIT_TESTS 00:16:00.607 #undef SPDK_CONFIG_URING 00:16:00.607 #define SPDK_CONFIG_URING_PATH 00:16:00.607 #undef SPDK_CONFIG_URING_ZNS 00:16:00.607 #undef SPDK_CONFIG_USDT 00:16:00.607 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:00.607 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:00.607 #define SPDK_CONFIG_VFIO_USER 1 00:16:00.607 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:00.607 #define SPDK_CONFIG_VHOST 1 00:16:00.607 #define SPDK_CONFIG_VIRTIO 1 00:16:00.607 #undef SPDK_CONFIG_VTUNE 00:16:00.607 #define SPDK_CONFIG_VTUNE_DIR 00:16:00.607 #define SPDK_CONFIG_WERROR 1 00:16:00.607 #define SPDK_CONFIG_WPDK_DIR 00:16:00.607 #undef SPDK_CONFIG_XNVME 00:16:00.607 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:16:00.607 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/common 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/common 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/../../../ 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/.run_test_name 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:16:00.608 23:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power ]] 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:16:00.608 23:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:16:00.608 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:16:00.609 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/ar-xnvme-fixer 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/ar-xnvme-fixer 00:16:00.610 23:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 271002 ]] 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 271002 00:16:00.610 23:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.i6j0rn 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target /tmp/spdk.i6j0rn/tests/target /tmp/spdk.i6j0rn 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=194562691072 00:16:00.610 23:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=201248804864 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6686113792 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=100614369280 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100624400384 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=40226734080 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=40249761792 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23027712 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=100624093184 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100624404480 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=311296 00:16:00.610 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20124864512 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20124876800 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:16:00.611 23:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:16:00.611 * Looking for test storage... 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=194562691072 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8900706304 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:16:00.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:16:00.611 
23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:00.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.611 --rc genhtml_branch_coverage=1 00:16:00.611 --rc genhtml_function_coverage=1 00:16:00.611 --rc genhtml_legend=1 00:16:00.611 --rc geninfo_all_blocks=1 00:16:00.611 --rc geninfo_unexecuted_blocks=1 00:16:00.611 00:16:00.611 ' 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:00.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.611 --rc genhtml_branch_coverage=1 00:16:00.611 --rc genhtml_function_coverage=1 00:16:00.611 --rc genhtml_legend=1 00:16:00.611 --rc geninfo_all_blocks=1 00:16:00.611 --rc geninfo_unexecuted_blocks=1 00:16:00.611 00:16:00.611 ' 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:00.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.611 --rc genhtml_branch_coverage=1 00:16:00.611 --rc genhtml_function_coverage=1 00:16:00.611 --rc genhtml_legend=1 00:16:00.611 --rc geninfo_all_blocks=1 00:16:00.611 --rc geninfo_unexecuted_blocks=1 00:16:00.611 00:16:00.611 ' 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:00.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.611 --rc genhtml_branch_coverage=1 00:16:00.611 --rc genhtml_function_coverage=1 00:16:00.611 --rc genhtml_legend=1 00:16:00.611 --rc geninfo_all_blocks=1 00:16:00.611 --rc geninfo_unexecuted_blocks=1 00:16:00.611 00:16:00.611 ' 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:00.611 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:00.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:00.612 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:00.871 23:57:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:16:00.871 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:00.871 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:16:00.871 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:00.871 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.871 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:00.871 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:00.871 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:00.871 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.871 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.871 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.871 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:00.871 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:00.871 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:16:00.871 23:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:07.448 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:07.448 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:07.448 23:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:07.448 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:07.449 Found net devices under 0000:86:00.0: cvl_0_0 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:07.449 Found net devices under 0000:86:00.1: cvl_0_1 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:07.449 23:57:41 
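The device walk traced above reduces to reading whatever netdev name the kernel exposes under each matching PCI function in sysfs. A minimal sketch, using the two E810 functions (0x8086:0x159b, ice driver) and the cvl_0_0/cvl_0_1 names reported in this run; the BDF addresses and interface names are host-specific, not defaults:

  for pci in 0000:86:00.0 0000:86:00.1; do
      # each supported NIC function exposes its netdev under its sysfs node
      ls "/sys/bus/pci/devices/$pci/net/"   # -> cvl_0_0, cvl_0_1 on this host
  done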
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:07.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:16:07.449 00:16:07.449 --- 10.0.0.2 ping statistics --- 00:16:07.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.449 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:07.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:16:07.449 00:16:07.449 --- 10.0.0.1 ping statistics --- 00:16:07.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.449 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:07.449 ************************************ 00:16:07.449 START TEST nvmf_filesystem_no_in_capsule 00:16:07.449 ************************************ 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=274042 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 274042 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 274042 ']' 00:16:07.449 23:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.449 [2024-12-09 23:57:41.572887] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:16:07.449 [2024-12-09 23:57:41.572929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.449 [2024-12-09 23:57:41.653342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.449 [2024-12-09 23:57:41.695135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.449 [2024-12-09 23:57:41.695176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.449 [2024-12-09 23:57:41.695184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.449 [2024-12-09 23:57:41.695190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.449 [2024-12-09 23:57:41.695196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
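The nvmf_tcp_init plumbing traced above boils down to the following condensed sketch: one E810 port is moved into a private network namespace to act as the target side, the peer port stays in the root namespace as the initiator side, and the target app is launched inside the namespace. Interface names, the 10.0.0.0/24 addresses and SPDK_BIN_DIR are taken from this run rather than being general defaults:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # nvmfappstart: run the target inside the namespace (shm id 0, all tracepoint groups, cores 0-3)
  ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN_DIR"/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &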
00:16:07.449 [2024-12-09 23:57:41.696621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.449 [2024-12-09 23:57:41.696731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.449 [2024-12-09 23:57:41.696838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.449 [2024-12-09 23:57:41.696839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.449 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.450 [2024-12-09 23:57:41.835241] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.450 Malloc1 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.450 23:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.450 [2024-12-09 23:57:41.994123] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:16:07.450 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:07.450 { 00:16:07.450 "name": "Malloc1", 00:16:07.450 "aliases": [ 00:16:07.450 "a9a3729e-0993-464d-829c-6b91d544c631" 00:16:07.450 ], 00:16:07.450 "product_name": "Malloc disk", 00:16:07.450 "block_size": 512, 00:16:07.450 "num_blocks": 1048576, 00:16:07.450 "uuid": "a9a3729e-0993-464d-829c-6b91d544c631", 00:16:07.450 "assigned_rate_limits": { 00:16:07.450 "rw_ios_per_sec": 0, 00:16:07.450 "rw_mbytes_per_sec": 0, 00:16:07.450 "r_mbytes_per_sec": 0, 00:16:07.450 "w_mbytes_per_sec": 0 00:16:07.450 }, 00:16:07.450 "claimed": true, 00:16:07.450 "claim_type": "exclusive_write", 00:16:07.450 "zoned": false, 00:16:07.450 "supported_io_types": { 00:16:07.450 "read": 
true, 00:16:07.450 "write": true, 00:16:07.450 "unmap": true, 00:16:07.450 "flush": true, 00:16:07.450 "reset": true, 00:16:07.450 "nvme_admin": false, 00:16:07.450 "nvme_io": false, 00:16:07.450 "nvme_io_md": false, 00:16:07.450 "write_zeroes": true, 00:16:07.450 "zcopy": true, 00:16:07.450 "get_zone_info": false, 00:16:07.450 "zone_management": false, 00:16:07.450 "zone_append": false, 00:16:07.450 "compare": false, 00:16:07.450 "compare_and_write": false, 00:16:07.450 "abort": true, 00:16:07.450 "seek_hole": false, 00:16:07.450 "seek_data": false, 00:16:07.450 "copy": true, 00:16:07.450 "nvme_iov_md": false 00:16:07.450 }, 00:16:07.450 "memory_domains": [ 00:16:07.450 { 00:16:07.450 "dma_device_id": "system", 00:16:07.450 "dma_device_type": 1 00:16:07.450 }, 00:16:07.450 { 00:16:07.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.450 "dma_device_type": 2 00:16:07.450 } 00:16:07.450 ], 00:16:07.450 "driver_specific": {} 00:16:07.450 } 00:16:07.450 ]' 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:07.450 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:08.387 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:08.387 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:16:08.387 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.387 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:08.387 23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:16:10.293 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:10.293 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:10.293 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:16:10.553 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:10.553 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.553 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:16:10.553 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:10.553 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:10.553 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:10.553 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:10.553 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:10.553 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:10.553 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:10.553 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:10.553 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:10.553 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:10.553 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:10.812 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:11.071 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:12.008 ************************************ 00:16:12.008 START TEST filesystem_ext4 00:16:12.008 ************************************ 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
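Condensed from the xtrace above, the no-in-capsule setup drives a fixed RPC sequence on the target and then attaches a Linux initiator over TCP before the per-filesystem tests start. A minimal sketch of that sequence is below; issuing the calls through scripts/rpc.py rather than the rpc_cmd wrapper is an assumption, the host NQN/ID are placeholders, and the bounded retry of the original waitforserial is simplified to an open-ended poll.

# Target side: transport, backing bdev, subsystem, namespace, listener
# (arguments copied from the rpc_cmd calls traced above).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0          # -c 0: no in-capsule data
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                 # 512 MB malloc bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, wait for the namespace to show up, then partition it.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:<host-uuid> --hostid=<host-uuid>
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
parted -s "/dev/${nvme_name}" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe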
00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:16:12.008 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:12.268 mke2fs 1.47.0 (5-Feb-2023) 00:16:12.268 Discarding device blocks: 0/522240 done 00:16:12.268 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:12.268 Filesystem UUID: 235c454a-7c1e-4484-ba55-9e16c30b0fb5 00:16:12.268 Superblock backups stored on blocks: 00:16:12.268 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:12.268 00:16:12.268 Allocating group tables: 0/64 done 00:16:12.268 Writing inode tables: 0/64 done 00:16:12.268 Creating journal (8192 blocks): done 00:16:13.464 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:16:13.464 00:16:13.464 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:16:13.464 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:20.054 
23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 274042 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:20.054 00:16:20.054 real 0m7.340s 00:16:20.054 user 0m0.028s 00:16:20.054 sys 0m0.116s 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:20.054 ************************************ 00:16:20.054 END TEST filesystem_ext4 00:16:20.054 ************************************ 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:20.054 ************************************ 00:16:20.054 START TEST filesystem_btrfs 00:16:20.054 ************************************ 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:16:20.054 23:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:16:20.054 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:20.054 btrfs-progs v6.8.1 00:16:20.054 See https://btrfs.readthedocs.io for more information. 00:16:20.054 00:16:20.054 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:16:20.054 NOTE: several default settings have changed in version 5.15, please make sure 00:16:20.054 this does not affect your deployments: 00:16:20.054 - DUP for metadata (-m dup) 00:16:20.054 - enabled no-holes (-O no-holes) 00:16:20.054 - enabled free-space-tree (-R free-space-tree) 00:16:20.054 00:16:20.054 Label: (null) 00:16:20.054 UUID: 2d9358b5-e136-48e3-ae00-366ca33f4777 00:16:20.054 Node size: 16384 00:16:20.054 Sector size: 4096 (CPU page size: 4096) 00:16:20.054 Filesystem size: 510.00MiB 00:16:20.054 Block group profiles: 00:16:20.054 Data: single 8.00MiB 00:16:20.054 Metadata: DUP 32.00MiB 00:16:20.054 System: DUP 8.00MiB 00:16:20.055 SSD detected: yes 00:16:20.055 Zoned device: no 00:16:20.055 Features: extref, skinny-metadata, no-holes, free-space-tree 00:16:20.055 Checksum: crc32c 00:16:20.055 Number of devices: 1 00:16:20.055 Devices: 00:16:20.055 ID SIZE PATH 00:16:20.055 1 510.00MiB /dev/nvme0n1p1 00:16:20.055 00:16:20.055 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:16:20.055 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 274042 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:20.623 
23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:20.623 00:16:20.623 real 0m1.109s 00:16:20.623 user 0m0.032s 00:16:20.623 sys 0m0.136s 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:20.623 ************************************ 00:16:20.623 END TEST filesystem_btrfs 00:16:20.623 ************************************ 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:20.623 ************************************ 00:16:20.623 START TEST filesystem_xfs 00:16:20.623 ************************************ 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:16:20.623 23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:20.882 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:20.882 = sectsz=512 attr=2, projid32bit=1 00:16:20.882 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:20.882 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:20.882 data 
= bsize=4096 blocks=130560, imaxpct=25 00:16:20.882 = sunit=0 swidth=0 blks 00:16:20.882 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:20.882 log =internal log bsize=4096 blocks=16384, version=2 00:16:20.882 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:20.882 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:21.819 Discarding blocks...Done. 00:16:21.819 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:16:21.819 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:24.355 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:24.355 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:16:24.355 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:24.355 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:16:24.355 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:16:24.355 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:24.355 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 274042 00:16:24.355 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:24.355 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:24.355 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:24.355 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:24.355 00:16:24.355 real 0m3.289s 00:16:24.355 user 0m0.022s 00:16:24.355 sys 0m0.120s 00:16:24.355 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.355 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:24.355 ************************************ 00:16:24.355 END TEST filesystem_xfs 00:16:24.355 ************************************ 00:16:24.355 23:57:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:24.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.355 23:57:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 274042 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 274042 ']' 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 274042 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.355 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274042 00:16:24.615 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.615 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.615 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274042' 00:16:24.615 killing process with pid 274042 00:16:24.615 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 274042 00:16:24.615 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 274042 00:16:24.874 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:24.874 00:16:24.874 real 0m18.130s 00:16:24.874 user 1m11.312s 00:16:24.874 sys 0m1.583s 00:16:24.874 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.874 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:24.874 ************************************ 00:16:24.874 END TEST nvmf_filesystem_no_in_capsule 00:16:24.874 ************************************ 00:16:24.874 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:16:24.874 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:24.874 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.874 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:24.874 ************************************ 00:16:24.874 START TEST nvmf_filesystem_in_capsule 00:16:24.874 ************************************ 00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=277266 00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 277266 00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 277266 ']' 00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
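The filesystem_ext4, filesystem_btrfs and filesystem_xfs runs above all exercise the same small check from target/filesystem.sh: format the partition, mount it, do a write/remove cycle, unmount, and confirm that both the target process and the block devices survived. A hedged condensation of that per-filesystem body is below (the function name, argument handling and the lack of retry logic are simplifications, not the script verbatim); nvmfpid is the target PID recorded at start-up, e.g. 274042 in the trace above.

# Hypothetical condensation of the per-filesystem check traced above.
# usage: filesystem_smoke_test ext4 nvme0n1
filesystem_smoke_test() {
    local fstype=$1                     # ext4 | btrfs | xfs
    local disk=$2                       # e.g. nvme0n1
    local part=/dev/${disk}p1           # partition created earlier by parted
    local force=-F
    [ "$fstype" != ext4 ] && force=-f   # mkfs.ext4 wants -F, btrfs/xfs want -f

    "mkfs.${fstype}" "$force" "$part"
    mount "$part" /mnt/device
    touch /mnt/device/aaa               # push a write through the NVMe/TCP path
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    kill -0 "$nvmfpid"                      # target must still be alive
    lsblk -l -o NAME | grep -qw "$disk"     # namespace still visible
    lsblk -l -o NAME | grep -qw "${disk}p1" # partition still visible
}

After the three filesystems pass, the trace tears the path down in reverse order: nvme disconnect -n nqn.2016-06.io.spdk:cnode1, nvmf_delete_subsystem over RPC, and finally killing the nvmf_tgt PID.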
00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.875 23:57:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:24.875 [2024-12-09 23:57:59.779097] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:16:24.875 [2024-12-09 23:57:59.779138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.134 [2024-12-09 23:57:59.858492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:25.135 [2024-12-09 23:57:59.897767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.135 [2024-12-09 23:57:59.897804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.135 [2024-12-09 23:57:59.897811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.135 [2024-12-09 23:57:59.897818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.135 [2024-12-09 23:57:59.897823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.135 [2024-12-09 23:57:59.899269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.135 [2024-12-09 23:57:59.899376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.135 [2024-12-09 23:57:59.899492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.135 [2024-12-09 23:57:59.899493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:25.135 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.135 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:16:25.135 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:25.135 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:25.135 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:25.135 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.135 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:25.135 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:16:25.135 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.135 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:25.135 [2024-12-09 23:58:00.049945] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.135 23:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.135 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:25.135 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.135 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:25.394 Malloc1 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:25.395 [2024-12-09 23:58:00.215324] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:16:25.395 23:58:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:25.395 { 00:16:25.395 "name": "Malloc1", 00:16:25.395 "aliases": [ 00:16:25.395 "d4b2513b-5b21-4774-b18f-9fa714fef0c4" 00:16:25.395 ], 00:16:25.395 "product_name": "Malloc disk", 00:16:25.395 "block_size": 512, 00:16:25.395 "num_blocks": 1048576, 00:16:25.395 "uuid": "d4b2513b-5b21-4774-b18f-9fa714fef0c4", 00:16:25.395 "assigned_rate_limits": { 00:16:25.395 "rw_ios_per_sec": 0, 00:16:25.395 "rw_mbytes_per_sec": 0, 00:16:25.395 "r_mbytes_per_sec": 0, 00:16:25.395 "w_mbytes_per_sec": 0 00:16:25.395 }, 00:16:25.395 "claimed": true, 00:16:25.395 "claim_type": "exclusive_write", 00:16:25.395 "zoned": false, 00:16:25.395 "supported_io_types": { 00:16:25.395 "read": true, 00:16:25.395 "write": true, 00:16:25.395 "unmap": true, 00:16:25.395 "flush": true, 00:16:25.395 "reset": true, 00:16:25.395 "nvme_admin": false, 00:16:25.395 "nvme_io": false, 00:16:25.395 "nvme_io_md": false, 00:16:25.395 "write_zeroes": true, 00:16:25.395 "zcopy": true, 00:16:25.395 "get_zone_info": false, 00:16:25.395 "zone_management": false, 00:16:25.395 "zone_append": false, 00:16:25.395 "compare": false, 00:16:25.395 "compare_and_write": false, 00:16:25.395 "abort": true, 00:16:25.395 "seek_hole": false, 00:16:25.395 "seek_data": false, 00:16:25.395 "copy": true, 00:16:25.395 "nvme_iov_md": false 00:16:25.395 }, 00:16:25.395 "memory_domains": [ 00:16:25.395 { 00:16:25.395 "dma_device_id": "system", 00:16:25.395 "dma_device_type": 1 00:16:25.395 }, 00:16:25.395 { 00:16:25.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.395 "dma_device_type": 2 00:16:25.395 } 00:16:25.395 ], 00:16:25.395 "driver_specific": {} 00:16:25.395 } 00:16:25.395 ]' 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:25.395 23:58:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:26.770 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:26.770 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:16:26.770 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.770 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:26.770 23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:28.672 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:28.930 23:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:29.189 23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.125 ************************************ 00:16:30.125 START TEST filesystem_in_capsule_ext4 00:16:30.125 ************************************ 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:16:30.125 23:58:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:30.125 mke2fs 1.47.0 (5-Feb-2023) 00:16:30.125 Discarding device blocks: 0/522240 done 00:16:30.125 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:30.125 Filesystem UUID: 4cfcf703-a639-4a89-9ff8-511e201dca05 00:16:30.125 Superblock backups stored on blocks: 00:16:30.125 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:30.125 00:16:30.125 Allocating group tables: 0/64 done 00:16:30.125 Writing inode tables: 
0/64 done 00:16:30.383 Creating journal (8192 blocks): done 00:16:30.949 Writing superblocks and filesystem accounting information: 0/64 done 00:16:30.949 00:16:30.949 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:16:30.949 23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:36.215 23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:36.215 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:16:36.215 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:36.215 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:16:36.215 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:36.215 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:36.215 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 277266 00:16:36.215 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:36.215 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:36.215 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:36.215 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:36.215 00:16:36.215 real 0m6.116s 00:16:36.215 user 0m0.022s 00:16:36.215 sys 0m0.075s 00:16:36.216 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.216 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:36.216 ************************************ 00:16:36.216 END TEST filesystem_in_capsule_ext4 00:16:36.216 ************************************ 00:16:36.216 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:36.216 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:36.216 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.216 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:36.475 
************************************ 00:16:36.475 START TEST filesystem_in_capsule_btrfs 00:16:36.475 ************************************ 00:16:36.475 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:36.475 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:36.475 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:36.475 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:36.475 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:16:36.475 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:36.475 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:16:36.475 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:16:36.475 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:16:36.475 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:16:36.475 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:36.475 btrfs-progs v6.8.1 00:16:36.475 See https://btrfs.readthedocs.io for more information. 00:16:36.475 00:16:36.475 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:16:36.475 NOTE: several default settings have changed in version 5.15, please make sure 00:16:36.475 this does not affect your deployments: 00:16:36.475 - DUP for metadata (-m dup) 00:16:36.475 - enabled no-holes (-O no-holes) 00:16:36.475 - enabled free-space-tree (-R free-space-tree) 00:16:36.475 00:16:36.475 Label: (null) 00:16:36.475 UUID: 5a4aa57b-44c6-41c9-bd60-a3db9b826d41 00:16:36.475 Node size: 16384 00:16:36.475 Sector size: 4096 (CPU page size: 4096) 00:16:36.475 Filesystem size: 510.00MiB 00:16:36.475 Block group profiles: 00:16:36.475 Data: single 8.00MiB 00:16:36.475 Metadata: DUP 32.00MiB 00:16:36.475 System: DUP 8.00MiB 00:16:36.475 SSD detected: yes 00:16:36.475 Zoned device: no 00:16:36.475 Features: extref, skinny-metadata, no-holes, free-space-tree 00:16:36.475 Checksum: crc32c 00:16:36.475 Number of devices: 1 00:16:36.475 Devices: 00:16:36.475 ID SIZE PATH 00:16:36.475 1 510.00MiB /dev/nvme0n1p1 00:16:36.475 00:16:36.475 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:16:36.475 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:36.734 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:36.734 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:16:36.734 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:36.734 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:16:36.734 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:36.734 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:36.734 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 277266 00:16:36.734 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:36.734 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:36.734 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:36.735 00:16:36.735 real 0m0.418s 00:16:36.735 user 0m0.023s 00:16:36.735 sys 0m0.116s 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:16:36.735 ************************************ 00:16:36.735 END TEST filesystem_in_capsule_btrfs 00:16:36.735 ************************************ 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:36.735 ************************************ 00:16:36.735 START TEST filesystem_in_capsule_xfs 00:16:36.735 ************************************ 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:16:36.735 23:58:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:37.672 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:37.672 = sectsz=512 attr=2, projid32bit=1 00:16:37.672 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:37.672 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:37.673 data = bsize=4096 blocks=130560, imaxpct=25 00:16:37.673 = sunit=0 swidth=0 blks 00:16:37.673 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:37.673 log =internal log bsize=4096 blocks=16384, version=2 00:16:37.673 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:37.673 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:38.610 Discarding blocks...Done. 
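The make_filesystem calls traced for ext4, btrfs and xfs all run through the same helper in common/autotest_common.sh, which selects -F for ext4 and -f for the other filesystem types and keeps a retry counter (the local i=0 in the trace). A simplified sketch of that helper, under the assumption that it retries mkfs a few times before giving up (the retry limit and sleep are illustrative, not the script's actual values):

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F              # mkfs.ext4 uses -F to force
    else
        force=-f              # mkfs.btrfs / mkfs.xfs use -f
    fi
    until mkfs."$fstype" $force "$dev_name"; do
        (( ++i > 5 )) && return 1   # assumed retry limit
        sleep 1                     # assumed back-off while the namespace settles
    done
    return 0
}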
00:16:38.610 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:16:38.610 23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 277266 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:41.147 00:16:41.147 real 0m4.141s 00:16:41.147 user 0m0.021s 00:16:41.147 sys 0m0.077s 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:41.147 ************************************ 00:16:41.147 END TEST filesystem_in_capsule_xfs 00:16:41.147 ************************************ 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:41.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 277266 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 277266 ']' 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 277266 00:16:41.147 23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:16:41.147 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.147 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277266 00:16:41.147 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:41.147 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:41.147 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277266' 00:16:41.147 killing process with pid 277266 00:16:41.147 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 277266 00:16:41.147 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 277266 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:41.717 00:16:41.717 real 0m16.642s 00:16:41.717 user 1m5.437s 00:16:41.717 sys 0m1.402s 00:16:41.717 23:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:41.717 ************************************ 00:16:41.717 END TEST nvmf_filesystem_in_capsule 00:16:41.717 ************************************ 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:41.717 rmmod nvme_tcp 00:16:41.717 rmmod nvme_fabrics 00:16:41.717 rmmod nvme_keyring 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.717 23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.627 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:43.627 00:16:43.627 real 0m43.464s 00:16:43.627 user 2m18.765s 00:16:43.627 sys 0m7.692s 00:16:43.627 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.627 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:43.627 
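The nvmftestfini teardown traced just above unloads the initiator-side kernel modules, strips only the firewall rules tagged with an SPDK_NVMF comment, and removes the target's network namespace. Condensed into a sketch using this run's interface and namespace names (the netns removal is an assumption about what _remove_spdk_ns does, since its output is suppressed in the trace):

modprobe -v -r nvme-tcp        # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above come from here
modprobe -v -r nvme-fabrics
# remove only the rules this test installed; they carry an SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                      # clear the initiator-side address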
************************************ 00:16:43.627 END TEST nvmf_filesystem 00:16:43.627 ************************************ 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:43.886 ************************************ 00:16:43.886 START TEST nvmf_target_discovery 00:16:43.886 ************************************ 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:43.886 * Looking for test storage... 00:16:43.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:43.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.886 --rc genhtml_branch_coverage=1 00:16:43.886 --rc genhtml_function_coverage=1 00:16:43.886 --rc genhtml_legend=1 00:16:43.886 --rc geninfo_all_blocks=1 00:16:43.886 --rc geninfo_unexecuted_blocks=1 00:16:43.886 00:16:43.886 ' 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:43.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.886 --rc genhtml_branch_coverage=1 00:16:43.886 --rc genhtml_function_coverage=1 00:16:43.886 --rc genhtml_legend=1 00:16:43.886 --rc geninfo_all_blocks=1 00:16:43.886 --rc geninfo_unexecuted_blocks=1 00:16:43.886 00:16:43.886 ' 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:43.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.886 --rc genhtml_branch_coverage=1 00:16:43.886 --rc genhtml_function_coverage=1 00:16:43.886 --rc genhtml_legend=1 00:16:43.886 --rc geninfo_all_blocks=1 00:16:43.886 --rc geninfo_unexecuted_blocks=1 00:16:43.886 00:16:43.886 ' 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:43.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.886 --rc genhtml_branch_coverage=1 00:16:43.886 --rc genhtml_function_coverage=1 00:16:43.886 --rc genhtml_legend=1 00:16:43.886 --rc geninfo_all_blocks=1 00:16:43.886 --rc geninfo_unexecuted_blocks=1 00:16:43.886 00:16:43.886 ' 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:16:43.886 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:16:43.887 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:44.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:16:44.146 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:44.147 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.147 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:44.147 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:44.147 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:44.147 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.147 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.147 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.147 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:44.147 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:44.147 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:16:44.147 23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.734 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:50.734 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:16:50.734 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:50.734 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:50.734 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:50.734 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:16:50.735 23:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:50.735 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:50.735 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:50.735 Found net devices under 0000:86:00.0: cvl_0_0 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:50.735 Found net devices under 0000:86:00.1: cvl_0_1 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:50.735 23:58:24 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:50.735 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:50.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:16:50.735 00:16:50.735 --- 10.0.0.2 ping statistics --- 00:16:50.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.736 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:50.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:50.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:16:50.736 00:16:50.736 --- 10.0.0.1 ping statistics --- 00:16:50.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.736 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=283787 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 283787 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 283787 ']' 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.736 23:58:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 [2024-12-09 23:58:24.845151] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:16:50.736 [2024-12-09 23:58:24.845200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.736 [2024-12-09 23:58:24.920917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:50.736 [2024-12-09 23:58:24.962620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.736 [2024-12-09 23:58:24.962659] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.736 [2024-12-09 23:58:24.962667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.736 [2024-12-09 23:58:24.962673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.736 [2024-12-09 23:58:24.962678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
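Before nvmf_tgt was started, the nvmf_tcp_init sequence traced above isolates the first e810 port in a private network namespace, addresses both ends of the link, opens the NVMe/TCP port in the firewall, and ping-checks both directions; the target is then launched inside that namespace. A condensed sketch using the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow the NVMe/TCP listener port in; the comment lets nvmftestfini find and remove the rule
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
# the target itself then runs inside the namespace (full repo path from the trace abbreviated here);
# the harness backgrounds it and waits on /var/tmp/spdk.sock via waitforlisten
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &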
00:16:50.736 [2024-12-09 23:58:24.964232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.736 [2024-12-09 23:58:24.964337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.736 [2024-12-09 23:58:24.964446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.736 [2024-12-09 23:58:24.964447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 [2024-12-09 23:58:25.110813] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 Null1 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 23:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 [2024-12-09 23:58:25.176304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 Null2 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:16:50.736 Null3 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.736 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.737 Null4 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.737 23:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:16:50.737 00:16:50.737 Discovery Log Number of Records 6, Generation counter 6 00:16:50.737 =====Discovery Log Entry 0====== 00:16:50.737 trtype: tcp 00:16:50.737 adrfam: ipv4 00:16:50.737 subtype: current discovery subsystem 00:16:50.737 treq: not required 00:16:50.737 portid: 0 00:16:50.737 trsvcid: 4420 00:16:50.737 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:50.737 traddr: 10.0.0.2 00:16:50.737 eflags: explicit discovery connections, duplicate discovery information 00:16:50.737 sectype: none 00:16:50.737 =====Discovery Log Entry 1====== 00:16:50.737 trtype: tcp 00:16:50.737 adrfam: ipv4 00:16:50.737 subtype: nvme subsystem 00:16:50.737 treq: not required 00:16:50.737 portid: 0 00:16:50.737 trsvcid: 4420 00:16:50.737 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:50.737 traddr: 10.0.0.2 00:16:50.737 eflags: none 00:16:50.737 sectype: none 00:16:50.737 =====Discovery Log Entry 2====== 00:16:50.737 trtype: tcp 00:16:50.737 adrfam: ipv4 00:16:50.737 subtype: nvme subsystem 00:16:50.737 treq: not required 00:16:50.737 portid: 0 00:16:50.737 trsvcid: 4420 00:16:50.737 subnqn: nqn.2016-06.io.spdk:cnode2 00:16:50.737 traddr: 10.0.0.2 00:16:50.737 eflags: none 00:16:50.737 sectype: none 00:16:50.737 =====Discovery Log Entry 3====== 00:16:50.737 trtype: tcp 00:16:50.737 adrfam: ipv4 00:16:50.737 subtype: nvme subsystem 00:16:50.737 treq: not required 00:16:50.737 portid: 0 00:16:50.737 trsvcid: 4420 00:16:50.737 subnqn: nqn.2016-06.io.spdk:cnode3 00:16:50.737 traddr: 10.0.0.2 00:16:50.737 eflags: none 00:16:50.737 sectype: none 00:16:50.737 =====Discovery Log Entry 4====== 00:16:50.737 trtype: tcp 00:16:50.737 adrfam: ipv4 00:16:50.737 subtype: nvme subsystem 
00:16:50.737 treq: not required 00:16:50.737 portid: 0 00:16:50.737 trsvcid: 4420 00:16:50.737 subnqn: nqn.2016-06.io.spdk:cnode4 00:16:50.737 traddr: 10.0.0.2 00:16:50.737 eflags: none 00:16:50.737 sectype: none 00:16:50.737 =====Discovery Log Entry 5====== 00:16:50.737 trtype: tcp 00:16:50.737 adrfam: ipv4 00:16:50.737 subtype: discovery subsystem referral 00:16:50.737 treq: not required 00:16:50.737 portid: 0 00:16:50.737 trsvcid: 4430 00:16:50.737 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:50.737 traddr: 10.0.0.2 00:16:50.737 eflags: none 00:16:50.737 sectype: none 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:16:50.737 Perform nvmf subsystem discovery via RPC 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.737 [ 00:16:50.737 { 00:16:50.737 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:50.737 "subtype": "Discovery", 00:16:50.737 "listen_addresses": [ 00:16:50.737 { 00:16:50.737 "trtype": "TCP", 00:16:50.737 "adrfam": "IPv4", 00:16:50.737 "traddr": "10.0.0.2", 00:16:50.737 "trsvcid": "4420" 00:16:50.737 } 00:16:50.737 ], 00:16:50.737 "allow_any_host": true, 00:16:50.737 "hosts": [] 00:16:50.737 }, 00:16:50.737 { 00:16:50.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.737 "subtype": "NVMe", 00:16:50.737 "listen_addresses": [ 00:16:50.737 { 00:16:50.737 "trtype": "TCP", 00:16:50.737 "adrfam": "IPv4", 00:16:50.737 "traddr": "10.0.0.2", 00:16:50.737 "trsvcid": "4420" 00:16:50.737 } 00:16:50.737 ], 00:16:50.737 "allow_any_host": true, 00:16:50.737 "hosts": [], 00:16:50.737 "serial_number": "SPDK00000000000001", 00:16:50.737 "model_number": "SPDK bdev Controller", 00:16:50.737 "max_namespaces": 32, 00:16:50.737 "min_cntlid": 1, 00:16:50.737 "max_cntlid": 65519, 00:16:50.737 "namespaces": [ 00:16:50.737 { 00:16:50.737 "nsid": 1, 00:16:50.737 "bdev_name": "Null1", 00:16:50.737 "name": "Null1", 00:16:50.737 "nguid": "D73D5ED0FCE64F13B7F2EADE27FA8251", 00:16:50.737 "uuid": "d73d5ed0-fce6-4f13-b7f2-eade27fa8251" 00:16:50.737 } 00:16:50.737 ] 00:16:50.737 }, 00:16:50.737 { 00:16:50.737 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:50.737 "subtype": "NVMe", 00:16:50.737 "listen_addresses": [ 00:16:50.737 { 00:16:50.737 "trtype": "TCP", 00:16:50.737 "adrfam": "IPv4", 00:16:50.737 "traddr": "10.0.0.2", 00:16:50.737 "trsvcid": "4420" 00:16:50.737 } 00:16:50.737 ], 00:16:50.737 "allow_any_host": true, 00:16:50.737 "hosts": [], 00:16:50.737 "serial_number": "SPDK00000000000002", 00:16:50.737 "model_number": "SPDK bdev Controller", 00:16:50.737 "max_namespaces": 32, 00:16:50.737 "min_cntlid": 1, 00:16:50.737 "max_cntlid": 65519, 00:16:50.737 "namespaces": [ 00:16:50.737 { 00:16:50.737 "nsid": 1, 00:16:50.737 "bdev_name": "Null2", 00:16:50.737 "name": "Null2", 00:16:50.737 "nguid": "059CED5830AD4034A9C3AC824D65542A", 00:16:50.737 "uuid": "059ced58-30ad-4034-a9c3-ac824d65542a" 00:16:50.737 } 00:16:50.737 ] 00:16:50.737 }, 00:16:50.737 { 00:16:50.737 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:16:50.737 "subtype": "NVMe", 00:16:50.737 "listen_addresses": [ 00:16:50.737 { 00:16:50.737 "trtype": "TCP", 00:16:50.737 "adrfam": "IPv4", 00:16:50.737 "traddr": "10.0.0.2", 
00:16:50.737 "trsvcid": "4420" 00:16:50.737 } 00:16:50.737 ], 00:16:50.737 "allow_any_host": true, 00:16:50.737 "hosts": [], 00:16:50.737 "serial_number": "SPDK00000000000003", 00:16:50.737 "model_number": "SPDK bdev Controller", 00:16:50.737 "max_namespaces": 32, 00:16:50.737 "min_cntlid": 1, 00:16:50.737 "max_cntlid": 65519, 00:16:50.737 "namespaces": [ 00:16:50.737 { 00:16:50.737 "nsid": 1, 00:16:50.737 "bdev_name": "Null3", 00:16:50.737 "name": "Null3", 00:16:50.737 "nguid": "4A1DB427344B47BAB4D06DF23D50DA17", 00:16:50.737 "uuid": "4a1db427-344b-47ba-b4d0-6df23d50da17" 00:16:50.737 } 00:16:50.737 ] 00:16:50.737 }, 00:16:50.737 { 00:16:50.737 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:16:50.737 "subtype": "NVMe", 00:16:50.737 "listen_addresses": [ 00:16:50.737 { 00:16:50.737 "trtype": "TCP", 00:16:50.737 "adrfam": "IPv4", 00:16:50.737 "traddr": "10.0.0.2", 00:16:50.737 "trsvcid": "4420" 00:16:50.737 } 00:16:50.737 ], 00:16:50.737 "allow_any_host": true, 00:16:50.737 "hosts": [], 00:16:50.737 "serial_number": "SPDK00000000000004", 00:16:50.737 "model_number": "SPDK bdev Controller", 00:16:50.737 "max_namespaces": 32, 00:16:50.737 "min_cntlid": 1, 00:16:50.737 "max_cntlid": 65519, 00:16:50.737 "namespaces": [ 00:16:50.737 { 00:16:50.737 "nsid": 1, 00:16:50.737 "bdev_name": "Null4", 00:16:50.737 "name": "Null4", 00:16:50.737 "nguid": "C5BDC68393614758A120131D11B4A6BF", 00:16:50.737 "uuid": "c5bdc683-9361-4758-a120-131d11b4a6bf" 00:16:50.737 } 00:16:50.737 ] 00:16:50.737 } 00:16:50.737 ] 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.737 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.738 23:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:16:50.738 23:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:50.738 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:50.738 rmmod nvme_tcp 00:16:50.738 rmmod nvme_fabrics 00:16:50.738 rmmod nvme_keyring 00:16:50.998 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:50.998 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:16:50.998 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:16:50.998 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 283787 ']' 00:16:50.998 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 283787 00:16:50.998 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 283787 ']' 00:16:50.998 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 283787 00:16:50.998 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:16:50.998 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.998 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283787 00:16:50.998 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.998 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.998 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283787' 00:16:50.998 killing process with pid 283787 00:16:50.999 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 283787 00:16:50.999 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 283787 00:16:50.999 23:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:50.999 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:50.999 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:50.999 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:16:50.999 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:50.999 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:50.999 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:50.999 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:50.999 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:50.999 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.999 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.999 23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.538 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:53.538 00:16:53.538 real 0m9.348s 00:16:53.538 user 0m5.589s 00:16:53.538 sys 0m4.828s 00:16:53.538 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:53.538 23:58:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:53.538 ************************************ 00:16:53.538 END TEST nvmf_target_discovery 00:16:53.538 ************************************ 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:53.538 ************************************ 00:16:53.538 START TEST nvmf_referrals 00:16:53.538 ************************************ 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:53.538 * Looking for test storage... 
00:16:53.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:53.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.538 --rc genhtml_branch_coverage=1 00:16:53.538 --rc genhtml_function_coverage=1 00:16:53.538 --rc genhtml_legend=1 00:16:53.538 --rc geninfo_all_blocks=1 00:16:53.538 --rc geninfo_unexecuted_blocks=1 00:16:53.538 00:16:53.538 ' 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:53.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.538 --rc genhtml_branch_coverage=1 00:16:53.538 --rc genhtml_function_coverage=1 00:16:53.538 --rc genhtml_legend=1 00:16:53.538 --rc geninfo_all_blocks=1 00:16:53.538 --rc geninfo_unexecuted_blocks=1 00:16:53.538 00:16:53.538 ' 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:53.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.538 --rc genhtml_branch_coverage=1 00:16:53.538 --rc genhtml_function_coverage=1 00:16:53.538 --rc genhtml_legend=1 00:16:53.538 --rc geninfo_all_blocks=1 00:16:53.538 --rc geninfo_unexecuted_blocks=1 00:16:53.538 00:16:53.538 ' 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:53.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:53.538 --rc genhtml_branch_coverage=1 00:16:53.538 --rc genhtml_function_coverage=1 00:16:53.538 --rc genhtml_legend=1 00:16:53.538 --rc geninfo_all_blocks=1 00:16:53.538 --rc geninfo_unexecuted_blocks=1 00:16:53.538 00:16:53.538 ' 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 
-- # uname -s 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.538 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:53.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:16:53.539 23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 
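The trace above shows referrals.sh fixing its test parameters before the target comes up: three referral endpoints (127.0.0.2, 127.0.0.3, 127.0.0.4) on port 4430, the well-known discovery NQN, and the cnode1 subsystem NQN. Later in this run those endpoints are registered through the nvmf_discovery_add_referral RPC; a minimal standalone sketch of that step, assuming a running target and SPDK's scripts/rpc.py client (the test itself goes through its rpc_cmd wrapper instead):

    #!/usr/bin/env bash
    # Hypothetical sketch -- the real test drives these calls via rpc_cmd.
    set -euo pipefail

    RPC=./scripts/rpc.py        # assumed path to SPDK's RPC client
    REFERRAL_PORT=4430          # NVMF_PORT_REFERRAL above

    # Register one discovery referral per address configured above.
    for addr in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$RPC" nvmf_discovery_add_referral -t tcp -a "$addr" -s "$REFERRAL_PORT"
    done

    # List what was registered (the test later checks this count with jq).
    "$RPC" nvmf_discovery_get_referrals
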
00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:00.110 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:00.111 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:00.111 Found 0000:86:00.1 (0x8086 
- 0x159b) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:00.111 Found net devices under 0000:86:00.0: cvl_0_0 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:00.111 Found net devices under 0000:86:00.1: cvl_0_1 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
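The block above is nvmf/common.sh auto-detecting usable NICs: PCI functions are bucketed by device ID (0x1592/0x159b for Intel E810, 0x37d2 for X722, the Mellanox 0x10xx/0xa2xx IDs for mlx), and each selected function is then resolved to its kernel net device through sysfs, which is how the two E810 ports end up as cvl_0_0 and cvl_0_1. A simplified version of that sysfs lookup, assuming the PCI address is already known (the pci_bus_cache bookkeeping and error handling of the real helper are left out):

    #!/usr/bin/env bash
    # Simplified: map one PCI function to the net device(s) bound to it.
    set -euo pipefail

    pci=0000:86:00.0                       # one of the E810 ports found above

    # Every netdev bound to the function appears as a directory under .../net/.
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $dev ]] || continue          # function has no netdev attached
        echo "Found net device under $pci: ${dev##*/}"
    done
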
00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:00.111 23:58:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:00.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:00.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:17:00.111 00:17:00.111 --- 10.0.0.2 ping statistics --- 00:17:00.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.111 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:00.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:00.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:17:00.111 00:17:00.111 --- 10.0.0.1 ping statistics --- 00:17:00.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.111 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=287565 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 287565 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 287565 ']' 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:00.111 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.112 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.112 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
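At this point the topology for the test is in place: cvl_0_0 with 10.0.0.2/24 sits inside the cvl_0_0_ns_spdk namespace as the target-side interface, cvl_0_1 with 10.0.0.1/24 stays in the root namespace for the initiator, both directions answer ping, and nvmfappstart launches nvmf_tgt inside the namespace while waitforlisten blocks until the RPC socket is up. A rough equivalent of that launch-and-wait step, with the workspace path shortened and the readiness check reduced to a plain socket poll (the test's waitforlisten helper does more than this):

    #!/usr/bin/env bash
    # Simplified launch of the nvmf target inside the test namespace.
    set -euo pipefail

    NS=cvl_0_0_ns_spdk
    SOCK=/var/tmp/spdk.sock

    # Same flags as the trace: shm id 0, all tracepoint groups, 4-core mask.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Crude stand-in for waitforlisten: wait until the RPC socket shows up.
    for _ in $(seq 1 100); do
        [[ -S "$SOCK" ]] && break
        sleep 0.1
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on $SOCK"
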
00:17:00.112 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.112 23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.112 [2024-12-09 23:58:34.311347] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:17:00.112 [2024-12-09 23:58:34.311391] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.112 [2024-12-09 23:58:34.390268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:00.112 [2024-12-09 23:58:34.429459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.112 [2024-12-09 23:58:34.429499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.112 [2024-12-09 23:58:34.429506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.112 [2024-12-09 23:58:34.429513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.112 [2024-12-09 23:58:34.429518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.112 [2024-12-09 23:58:34.431112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.112 [2024-12-09 23:58:34.431222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.112 [2024-12-09 23:58:34.431255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.112 [2024-12-09 23:58:34.431257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:00.379 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.379 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:17:00.379 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:00.379 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:00.379 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.379 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.379 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:00.379 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 [2024-12-09 23:58:35.187427] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
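With the reactors up, the referral test first creates the TCP transport and exposes the discovery subsystem on the target address; the trace that follows then registers the three referrals and checks that nvmf_discovery_get_referrals reports all of them. A condensed sketch of that sequence, using scripts/rpc.py directly rather than the test's rpc_cmd wrapper (the -o and -u transport options are carried over from the trace as-is):

    #!/usr/bin/env bash
    set -euo pipefail
    RPC=./scripts/rpc.py                     # assumed path to SPDK's RPC client

    # Transport options exactly as logged above: -t tcp -o -u 8192.
    "$RPC" nvmf_create_transport -t tcp -o -u 8192

    # Put the discovery service on the target address used throughout this run.
    "$RPC" nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

    # Register the referrals and confirm the target reports all three.
    for addr in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$RPC" nvmf_discovery_add_referral -t tcp -a "$addr" -s 4430
    done
    count=$("$RPC" nvmf_discovery_get_referrals | jq length)
    (( count == 3 )) && echo "all three referrals visible"
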
00:17:00.380 [2024-12-09 23:58:35.209310] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.380 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.640 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:17:00.899 23:58:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:00.899 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:01.158 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.158 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:17:01.158 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:01.158 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:17:01.158 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:17:01.158 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:01.158 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:01.158 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:01.158 23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:01.158 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:17:01.158 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:17:01.158 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:17:01.158 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:17:01.158 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:01.158 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:01.159 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:01.418 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:17:01.418 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:17:01.418 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:17:01.418 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:17:01.418 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:01.418 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.677 23:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:01.677 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:01.937 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:17:01.937 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:01.937 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:17:01.937 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:17:01.937 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:01.937 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:01.937 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:01.937 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:17:01.937 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:17:01.937 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:17:01.937 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:17:01.937 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:01.937 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:02.196 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:02.196 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:17:02.196 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.196 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:02.196 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.196 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:02.196 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:17:02.196 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.196 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:02.196 23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.196 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:17:02.196 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:17:02.196 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:02.196 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:02.196 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:02.196 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:02.196 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
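Before the teardown that follows, it is worth noting that everything from referrals.sh@44 through @83 above reduces to one comparison made from two angles: the referral list the target reports over RPC versus the discovery log page a host actually retrieves. A condensed sketch of that check, reusing the RPC and nvme-cli invocations from the trace (the rpc.py path is an assumption, and the host NQN/ID are derived here the same way the logged values relate to each other):

  RPC=/path/to/spdk/scripts/rpc.py                      # assumption
  HOSTNQN=$(nvme gen-hostnqn)
  HOSTID=${HOSTNQN##*uuid:}                             # assumption: reuse the UUID part, as the trace's values do
  # Register the three referrals the test adds.
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      "$RPC" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  # View 1: what the target claims to advertise.
  rpc_ips=$("$RPC" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
  # View 2: what a host sees in the discovery log page served on 10.0.0.2:8009.
  nvme_ips=$(nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
  [ "$rpc_ips" = "$nvme_ips" ] && echo "referrals agree: $rpc_ips"

The later passes in the trace repeat the same comparison after removing the referrals and after re-adding 127.0.0.2 with explicit subsystem NQNs (discovery and nqn.2016-06.io.spdk:cnode1), checking the subtype and subnqn fields instead of just traddr.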
00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:02.455 rmmod nvme_tcp 00:17:02.455 rmmod nvme_fabrics 00:17:02.455 rmmod nvme_keyring 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 287565 ']' 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 287565 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 287565 ']' 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 287565 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 287565 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 287565' 00:17:02.455 killing process with pid 287565 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 287565 00:17:02.455 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 287565 00:17:02.715 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:02.715 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:02.715 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:02.715 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:17:02.715 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:17:02.715 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:02.715 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:17:02.715 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:02.715 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:02.715 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.715 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:02.715 23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:05.251 00:17:05.251 real 0m11.560s 00:17:05.251 user 0m15.005s 00:17:05.251 sys 0m5.316s 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:05.251 ************************************ 00:17:05.251 END TEST nvmf_referrals 00:17:05.251 ************************************ 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:05.251 ************************************ 00:17:05.251 START TEST nvmf_connect_disconnect 00:17:05.251 ************************************ 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:05.251 * Looking for test storage... 00:17:05.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
scripts/common.sh@344 -- # case "$op" in 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:05.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.251 --rc genhtml_branch_coverage=1 00:17:05.251 --rc genhtml_function_coverage=1 00:17:05.251 --rc genhtml_legend=1 00:17:05.251 --rc geninfo_all_blocks=1 00:17:05.251 --rc geninfo_unexecuted_blocks=1 00:17:05.251 00:17:05.251 ' 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:05.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.251 --rc genhtml_branch_coverage=1 00:17:05.251 --rc genhtml_function_coverage=1 00:17:05.251 --rc genhtml_legend=1 00:17:05.251 --rc geninfo_all_blocks=1 00:17:05.251 --rc geninfo_unexecuted_blocks=1 00:17:05.251 00:17:05.251 ' 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:05.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.251 --rc genhtml_branch_coverage=1 00:17:05.251 --rc genhtml_function_coverage=1 00:17:05.251 --rc genhtml_legend=1 00:17:05.251 --rc geninfo_all_blocks=1 00:17:05.251 --rc geninfo_unexecuted_blocks=1 00:17:05.251 00:17:05.251 ' 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:05.251 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.251 --rc genhtml_branch_coverage=1 00:17:05.251 --rc genhtml_function_coverage=1 00:17:05.251 --rc genhtml_legend=1 00:17:05.251 --rc geninfo_all_blocks=1 00:17:05.251 --rc geninfo_unexecuted_blocks=1 00:17:05.251 00:17:05.251 ' 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.251 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.252 23:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:05.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:17:05.252 23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:11.828 
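The device scan that begins here runs against the defaults test/nvmf/common.sh established a moment earlier: fixed TCP service ports and a host identity generated from nvme gen-hostnqn, which every later discover/connect call reuses through the NVME_HOST array. A sketch of those defaults, with the values copied from the trace (the HOSTID derivation is an assumption that happens to match the logged values):

  # Defaults from test/nvmf/common.sh as they appear in the trace.
  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  NVME_HOSTNQN=$(nvme gen-hostnqn)              # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}           # assumption: the UUID part, matching the logged value
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NVME_CONNECT='nvme connect'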
23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:11.828 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:11.828 
23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:11.828 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:11.828 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:11.829 Found net devices under 0000:86:00.0: cvl_0_0 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
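gather_supported_nvmf_pci_devs has now matched both ports of the Intel E810 NIC (0000:86:00.0 and 0000:86:00.1, device ID 0x159b, driver ice) and, in the lines that follow, resolves each PCI address to its network interface through sysfs. A small sketch of that PCI-to-netdev lookup, with the addresses taken from this run (they will differ on other hosts):

  # Map the two E810 ports found by the scan to their net interfaces via sysfs.
  for pci in 0000:86:00.0 0000:86:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$dev" ] && echo "$pci -> ${dev##*/}"
      done
  done
  # On this runner the trace resolves them to cvl_0_0 and cvl_0_1.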
00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:11.829 Found net devices under 0000:86:00.1: cvl_0_1 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:11.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:17:11.829 00:17:11.829 --- 10.0.0.2 ping statistics --- 00:17:11.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.829 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:11.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:11.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:17:11.829 00:17:11.829 --- 10.0.0.1 ping statistics --- 00:17:11.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.829 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=291647 00:17:11.829 23:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 291647 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 291647 ']' 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.829 23:58:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:11.829 [2024-12-09 23:58:45.879029] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:17:11.829 [2024-12-09 23:58:45.879080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.829 [2024-12-09 23:58:45.960472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:11.829 [2024-12-09 23:58:46.003890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.829 [2024-12-09 23:58:46.003925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.829 [2024-12-09 23:58:46.003934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.829 [2024-12-09 23:58:46.003940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.829 [2024-12-09 23:58:46.003945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
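The second target (pid 291647 here) only works because of the namespace wiring nvmf_tcp_init performed just before it: the first E810 port is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, the second port stays in the root namespace as the 10.0.0.1 initiator side, and an iptables rule opens the NVMe/TCP port. A sketch of that wiring, using the interface names from this run:

  # Namespace wiring performed by nvmf_tcp_init in the trace (interface names are host-specific).
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> initiator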
00:17:11.829 [2024-12-09 23:58:46.005372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.829 [2024-12-09 23:58:46.005407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.829 [2024-12-09 23:58:46.005421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:11.829 [2024-12-09 23:58:46.005426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:11.829 [2024-12-09 23:58:46.151937] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:11.829 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.830 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:11.830 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.830 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:11.830 23:58:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.830 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.830 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.830 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:11.830 [2024-12-09 23:58:46.221038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.830 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.830 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:17:11.830 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:17:11.830 23:58:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:17:15.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:27.528 rmmod nvme_tcp 00:17:27.528 rmmod nvme_fabrics 00:17:27.528 rmmod nvme_keyring 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 291647 ']' 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 291647 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 291647 ']' 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 291647 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
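Strung together, the RPCs traced above are the whole connect_disconnect setup. A condensed sketch using scripts/rpc.py directly (the test drives the same calls through its rpc_cmd wrapper), followed by roughly what each of the five logged iterations amounts to on the initiator side:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 64 512                       # returns Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # approximately one iteration of the loop:
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                    # prints "disconnected 1 controller(s)"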
00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291647 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291647' 00:17:27.528 killing process with pid 291647 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 291647 00:17:27.528 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 291647 00:17:27.788 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:27.788 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:27.788 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:27.788 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:27.788 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:17:27.788 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:17:27.788 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:27.788 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:27.788 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:27.788 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.788 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.788 23:59:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.326 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:30.326 00:17:30.326 real 0m25.029s 00:17:30.326 user 1m7.628s 00:17:30.326 sys 0m5.866s 00:17:30.326 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.326 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:30.326 ************************************ 00:17:30.326 END TEST nvmf_connect_disconnect 00:17:30.326 ************************************ 00:17:30.326 23:59:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:30.326 23:59:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:30.326 23:59:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.326 23:59:04 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:17:30.326 ************************************ 00:17:30.326 START TEST nvmf_multitarget 00:17:30.326 ************************************ 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:30.327 * Looking for test storage... 00:17:30.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:30.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.327 --rc genhtml_branch_coverage=1 00:17:30.327 --rc genhtml_function_coverage=1 00:17:30.327 --rc genhtml_legend=1 00:17:30.327 --rc geninfo_all_blocks=1 00:17:30.327 --rc geninfo_unexecuted_blocks=1 00:17:30.327 00:17:30.327 ' 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:30.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.327 --rc genhtml_branch_coverage=1 00:17:30.327 --rc genhtml_function_coverage=1 00:17:30.327 --rc genhtml_legend=1 00:17:30.327 --rc geninfo_all_blocks=1 00:17:30.327 --rc geninfo_unexecuted_blocks=1 00:17:30.327 00:17:30.327 ' 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:30.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.327 --rc genhtml_branch_coverage=1 00:17:30.327 --rc genhtml_function_coverage=1 00:17:30.327 --rc genhtml_legend=1 00:17:30.327 --rc geninfo_all_blocks=1 00:17:30.327 --rc geninfo_unexecuted_blocks=1 00:17:30.327 00:17:30.327 ' 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:30.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.327 --rc genhtml_branch_coverage=1 00:17:30.327 --rc genhtml_function_coverage=1 00:17:30.327 --rc genhtml_legend=1 00:17:30.327 --rc geninfo_all_blocks=1 00:17:30.327 --rc geninfo_unexecuted_blocks=1 00:17:30.327 00:17:30.327 ' 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:17:30.327 23:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:30.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py 00:17:30.327 23:59:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.327 23:59:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.327 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:30.327 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:30.327 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:30.327 23:59:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:36.900 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.900 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:36.900 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:36.900 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:36.900 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:36.900 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:36.900 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:36.900 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:36.900 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:36.900 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:36.900 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:36.901 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:36.901 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:36.901 Found net devices under 0000:86:00.0: cvl_0_0 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:36.901 Found net devices under 0000:86:00.1: cvl_0_1 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:36.901 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:36.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:17:36.902 00:17:36.902 --- 10.0.0.2 ping statistics --- 00:17:36.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.902 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:36.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:17:36.902 00:17:36.902 --- 10.0.0.1 ping statistics --- 00:17:36.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.902 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=298038 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 298038 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 298038 ']' 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.902 23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:36.902 [2024-12-09 23:59:11.017266] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
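As in the first test, waitforlisten simply blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. A rough hand-rolled equivalent (not the harness's actual implementation, just the idea) is to poll an innocuous RPC until it succeeds:

    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5    # keep polling until the RPC socket is up
    done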
00:17:36.902 [2024-12-09 23:59:11.017311] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.902 [2024-12-09 23:59:11.097111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.902 [2024-12-09 23:59:11.136732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.902 [2024-12-09 23:59:11.136771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.902 [2024-12-09 23:59:11.136779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.902 [2024-12-09 23:59:11.136785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.902 [2024-12-09 23:59:11.136790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:36.902 [2024-12-09 23:59:11.138390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.902 [2024-12-09 23:59:11.138505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.902 [2024-12-09 23:59:11.138590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.902 [2024-12-09 23:59:11.138591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:36.902 "nvmf_tgt_1" 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:36.902 "nvmf_tgt_2" 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
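The multitarget flow exercised here (and torn down again just below) is small enough to restate as a sketch; multitarget_rpc.py is the test's own helper under test/nvmf/target/, and the target names and core counts are the ones this run passes:

    RPC=./test/nvmf/target/multitarget_rpc.py
    $RPC nvmf_get_targets | jq length            # 1: only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    $RPC nvmf_get_targets | jq length            # now 3
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    $RPC nvmf_get_targets | jq length            # back to 1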
00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:36.902 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:36.902 true 00:17:36.903 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:37.162 true 00:17:37.162 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:37.162 23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:37.162 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:37.162 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:37.162 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:37.162 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:37.162 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:37.162 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:37.162 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:37.162 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:37.162 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:37.162 rmmod nvme_tcp 00:17:37.162 rmmod nvme_fabrics 00:17:37.162 rmmod nvme_keyring 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 298038 ']' 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 298038 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 298038 ']' 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 298038 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 298038 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:37.421 23:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 298038' 00:17:37.421 killing process with pid 298038 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 298038 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 298038 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:37.421 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:37.422 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:37.422 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.422 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:37.422 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.422 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.422 23:59:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.960 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:39.960 00:17:39.961 real 0m9.632s 00:17:39.961 user 0m7.274s 00:17:39.961 sys 0m4.913s 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:39.961 ************************************ 00:17:39.961 END TEST nvmf_multitarget 00:17:39.961 ************************************ 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:39.961 ************************************ 00:17:39.961 START TEST nvmf_rpc 00:17:39.961 ************************************ 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:39.961 * Looking for test storage... 
00:17:39.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:39.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.961 --rc genhtml_branch_coverage=1 00:17:39.961 --rc genhtml_function_coverage=1 00:17:39.961 --rc genhtml_legend=1 00:17:39.961 --rc geninfo_all_blocks=1 00:17:39.961 --rc geninfo_unexecuted_blocks=1 00:17:39.961 00:17:39.961 ' 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:39.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.961 --rc genhtml_branch_coverage=1 00:17:39.961 --rc genhtml_function_coverage=1 00:17:39.961 --rc genhtml_legend=1 00:17:39.961 --rc geninfo_all_blocks=1 00:17:39.961 --rc geninfo_unexecuted_blocks=1 00:17:39.961 00:17:39.961 ' 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:39.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.961 --rc genhtml_branch_coverage=1 00:17:39.961 --rc genhtml_function_coverage=1 00:17:39.961 --rc genhtml_legend=1 00:17:39.961 --rc geninfo_all_blocks=1 00:17:39.961 --rc geninfo_unexecuted_blocks=1 00:17:39.961 00:17:39.961 ' 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:39.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:39.961 --rc genhtml_branch_coverage=1 00:17:39.961 --rc genhtml_function_coverage=1 00:17:39.961 --rc genhtml_legend=1 00:17:39.961 --rc geninfo_all_blocks=1 00:17:39.961 --rc geninfo_unexecuted_blocks=1 00:17:39.961 00:17:39.961 ' 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.961 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:39.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:39.962 23:59:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:39.962 23:59:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:46.538 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:46.538 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:46.538 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:46.539 Found net devices under 0000:86:00.0: cvl_0_0 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:46.539 Found net devices under 0000:86:00.1: cvl_0_1 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:46.539 23:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:46.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:17:46.539 00:17:46.539 --- 10.0.0.2 ping statistics --- 00:17:46.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.539 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:46.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:17:46.539 00:17:46.539 --- 10.0.0.1 ping statistics --- 00:17:46.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.539 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=301762 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 301762 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 301762 ']' 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.539 [2024-12-09 23:59:20.703915] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
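Condensed, the test-network bring-up the trace performs before starting the target is roughly the sequence below; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are the ones from this run and will differ on other hosts (a sketch of the traced commands, not a drop-in script):

# move the target-side port into its own network namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                    # default ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # namespace -> default ns
# the target is then launched inside the namespace (path relative to the SPDK build tree):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &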
00:17:46.539 [2024-12-09 23:59:20.703958] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.539 [2024-12-09 23:59:20.785239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.539 [2024-12-09 23:59:20.826701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.539 [2024-12-09 23:59:20.826738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.539 [2024-12-09 23:59:20.826745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.539 [2024-12-09 23:59:20.826751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.539 [2024-12-09 23:59:20.826759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.539 [2024-12-09 23:59:20.828297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.539 [2024-12-09 23:59:20.828407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.539 [2024-12-09 23:59:20.828518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.539 [2024-12-09 23:59:20.828519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.539 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:46.539 "tick_rate": 2300000000, 00:17:46.539 "poll_groups": [ 00:17:46.539 { 00:17:46.539 "name": "nvmf_tgt_poll_group_000", 00:17:46.539 "admin_qpairs": 0, 00:17:46.539 "io_qpairs": 0, 00:17:46.539 "current_admin_qpairs": 0, 00:17:46.539 "current_io_qpairs": 0, 00:17:46.539 "pending_bdev_io": 0, 00:17:46.539 "completed_nvme_io": 0, 00:17:46.539 "transports": [] 00:17:46.539 }, 00:17:46.539 { 00:17:46.539 "name": "nvmf_tgt_poll_group_001", 00:17:46.539 "admin_qpairs": 0, 00:17:46.539 "io_qpairs": 0, 00:17:46.539 "current_admin_qpairs": 0, 00:17:46.539 "current_io_qpairs": 0, 00:17:46.539 "pending_bdev_io": 0, 00:17:46.539 "completed_nvme_io": 0, 00:17:46.539 "transports": [] 00:17:46.539 }, 00:17:46.539 { 00:17:46.539 "name": "nvmf_tgt_poll_group_002", 00:17:46.539 "admin_qpairs": 0, 00:17:46.540 "io_qpairs": 0, 00:17:46.540 
"current_admin_qpairs": 0, 00:17:46.540 "current_io_qpairs": 0, 00:17:46.540 "pending_bdev_io": 0, 00:17:46.540 "completed_nvme_io": 0, 00:17:46.540 "transports": [] 00:17:46.540 }, 00:17:46.540 { 00:17:46.540 "name": "nvmf_tgt_poll_group_003", 00:17:46.540 "admin_qpairs": 0, 00:17:46.540 "io_qpairs": 0, 00:17:46.540 "current_admin_qpairs": 0, 00:17:46.540 "current_io_qpairs": 0, 00:17:46.540 "pending_bdev_io": 0, 00:17:46.540 "completed_nvme_io": 0, 00:17:46.540 "transports": [] 00:17:46.540 } 00:17:46.540 ] 00:17:46.540 }' 00:17:46.540 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:46.540 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:46.540 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:46.540 23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.540 [2024-12-09 23:59:21.073854] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:46.540 "tick_rate": 2300000000, 00:17:46.540 "poll_groups": [ 00:17:46.540 { 00:17:46.540 "name": "nvmf_tgt_poll_group_000", 00:17:46.540 "admin_qpairs": 0, 00:17:46.540 "io_qpairs": 0, 00:17:46.540 "current_admin_qpairs": 0, 00:17:46.540 "current_io_qpairs": 0, 00:17:46.540 "pending_bdev_io": 0, 00:17:46.540 "completed_nvme_io": 0, 00:17:46.540 "transports": [ 00:17:46.540 { 00:17:46.540 "trtype": "TCP" 00:17:46.540 } 00:17:46.540 ] 00:17:46.540 }, 00:17:46.540 { 00:17:46.540 "name": "nvmf_tgt_poll_group_001", 00:17:46.540 "admin_qpairs": 0, 00:17:46.540 "io_qpairs": 0, 00:17:46.540 "current_admin_qpairs": 0, 00:17:46.540 "current_io_qpairs": 0, 00:17:46.540 "pending_bdev_io": 0, 00:17:46.540 "completed_nvme_io": 0, 00:17:46.540 "transports": [ 00:17:46.540 { 00:17:46.540 "trtype": "TCP" 00:17:46.540 } 00:17:46.540 ] 00:17:46.540 }, 00:17:46.540 { 00:17:46.540 "name": "nvmf_tgt_poll_group_002", 00:17:46.540 "admin_qpairs": 0, 00:17:46.540 "io_qpairs": 0, 00:17:46.540 "current_admin_qpairs": 0, 00:17:46.540 "current_io_qpairs": 0, 00:17:46.540 "pending_bdev_io": 0, 00:17:46.540 "completed_nvme_io": 0, 00:17:46.540 "transports": [ 00:17:46.540 { 00:17:46.540 "trtype": "TCP" 
00:17:46.540 } 00:17:46.540 ] 00:17:46.540 }, 00:17:46.540 { 00:17:46.540 "name": "nvmf_tgt_poll_group_003", 00:17:46.540 "admin_qpairs": 0, 00:17:46.540 "io_qpairs": 0, 00:17:46.540 "current_admin_qpairs": 0, 00:17:46.540 "current_io_qpairs": 0, 00:17:46.540 "pending_bdev_io": 0, 00:17:46.540 "completed_nvme_io": 0, 00:17:46.540 "transports": [ 00:17:46.540 { 00:17:46.540 "trtype": "TCP" 00:17:46.540 } 00:17:46.540 ] 00:17:46.540 } 00:17:46.540 ] 00:17:46.540 }' 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.540 Malloc1 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.540 [2024-12-09 23:59:21.253095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:17:46.540 [2024-12-09 23:59:21.281733] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:17:46.540 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:46.540 could not add new controller: failed to write to nvme-fabrics device 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:46.540 23:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.540 23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:47.918 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:47.918 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:47.918 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:47.918 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:47.918 23:59:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:49.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:49.822 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.823 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:49.823 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:49.823 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:49.823 [2024-12-09 23:59:24.646718] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:17:49.823 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:49.823 could not add new controller: failed to write to nvme-fabrics device 00:17:49.823 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:49.823 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.823 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.823 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.823 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:49.823 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.823 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.823 
23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.823 23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:51.200 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:51.200 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:51.200 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:51.200 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:51.200 23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:53.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:53.104 
23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.104 23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.104 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.104 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.104 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.104 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.104 [2024-12-09 23:59:28.010885] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.104 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.104 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:53.104 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.104 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.104 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.104 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:53.104 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.104 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.104 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.104 23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:54.483 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:54.483 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:54.483 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:54.483 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:54.483 23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:56.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:56.387 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.388 [2024-12-09 23:59:31.312144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.388 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.647 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.647 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:56.647 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.647 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.647 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.647 23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:58.025 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:58.025 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:58.025 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:58.025 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:58.025 23:59:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:59.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.932 [2024-12-09 23:59:34.723620] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.932 23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:01.311 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:01.311 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:01.311 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.311 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:01.311 23:59:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:03.218 
23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:03.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.218 23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
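The waitforserial / waitforserial_disconnect traces above reduce to a simple poll over lsblk output. A minimal sketch of that pattern follows; the *_sketch names are illustrative only, and the real helpers in common/autotest_common.sh carry extra bookkeeping not shown here.

# Poll until a block device exposing the given NVMe serial shows up (used
# right after `nvme connect`), or until it is gone again (after disconnect).
waitforserial_sketch() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        # one NAME,SERIAL row per block device; grep -c counts the matches
        (( $(lsblk -l -o NAME,SERIAL | grep -c -w "$serial") >= 1 )) && return 0
        sleep 2
    done
    return 1
}

waitforserial_disconnect_sketch() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        # done as soon as no device reports the serial any more
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 2
    done
    return 1
}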
00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.218 [2024-12-09 23:59:38.035860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.218 23:59:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:04.597 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:04.597 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:04.597 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:04.597 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:04.597 23:59:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:06.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
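Condensed, every pass of the rpc.sh loop being traced here performs the same subsystem lifecycle. A minimal sketch of one iteration, assuming rpc_cmd forwards to scripts/rpc.py of the already-running target and that NVME_HOSTNQN/NVME_HOSTID come from nvmf/common.sh:

# One iteration of the lifecycle the surrounding trace repeats $loops times.
NQN=nqn.2016-06.io.spdk:cnode1
SERIAL=SPDKISFASTANDAWESOME

rpc_cmd nvmf_create_subsystem "$NQN" -s "$SERIAL"
rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5          # bdev Malloc1 as nsid 5
rpc_cmd nvmf_subsystem_allow_any_host "$NQN"

nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
waitforserial "$SERIAL"             # block device appears on the initiator side

nvme disconnect -n "$NQN"
waitforserial_disconnect "$SERIAL"  # block device is gone again

rpc_cmd nvmf_subsystem_remove_ns "$NQN" 5
rpc_cmd nvmf_delete_subsystem "$NQN"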
00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.501 [2024-12-09 23:59:41.334913] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.501 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.502 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.502 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:06.502 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.502 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:06.502 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.502 23:59:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:07.879 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:07.880 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:07.880 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:07.880 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:07.880 23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:09.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.786 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:10.046 
23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.046 [2024-12-09 23:59:44.744438] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.046 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 [2024-12-09 23:59:44.796575] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 
23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 [2024-12-09 23:59:44.844702] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 [2024-12-09 23:59:44.892849] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 [2024-12-09 23:59:44.945043] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.047 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.311 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.311 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:10.311 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.311 23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:10.311 "tick_rate": 2300000000, 00:18:10.311 "poll_groups": [ 00:18:10.311 { 00:18:10.311 "name": "nvmf_tgt_poll_group_000", 00:18:10.311 "admin_qpairs": 2, 00:18:10.311 "io_qpairs": 168, 00:18:10.311 "current_admin_qpairs": 0, 00:18:10.311 "current_io_qpairs": 0, 00:18:10.311 "pending_bdev_io": 0, 00:18:10.311 "completed_nvme_io": 267, 00:18:10.311 "transports": [ 00:18:10.311 { 00:18:10.311 "trtype": "TCP" 00:18:10.311 } 00:18:10.311 ] 00:18:10.311 }, 00:18:10.311 { 00:18:10.311 "name": "nvmf_tgt_poll_group_001", 00:18:10.311 "admin_qpairs": 2, 00:18:10.311 "io_qpairs": 168, 00:18:10.311 "current_admin_qpairs": 0, 00:18:10.311 "current_io_qpairs": 0, 00:18:10.311 "pending_bdev_io": 0, 00:18:10.311 "completed_nvme_io": 268, 00:18:10.311 "transports": [ 00:18:10.311 { 00:18:10.311 "trtype": "TCP" 00:18:10.311 } 00:18:10.311 ] 00:18:10.311 }, 00:18:10.311 { 00:18:10.311 "name": "nvmf_tgt_poll_group_002", 00:18:10.311 "admin_qpairs": 1, 00:18:10.311 "io_qpairs": 168, 00:18:10.311 "current_admin_qpairs": 0, 00:18:10.311 "current_io_qpairs": 0, 00:18:10.311 "pending_bdev_io": 0, 00:18:10.311 "completed_nvme_io": 268, 00:18:10.311 "transports": [ 00:18:10.311 { 00:18:10.311 "trtype": "TCP" 00:18:10.311 } 00:18:10.311 ] 00:18:10.311 }, 00:18:10.311 { 00:18:10.311 "name": "nvmf_tgt_poll_group_003", 00:18:10.311 "admin_qpairs": 2, 00:18:10.311 "io_qpairs": 168, 00:18:10.311 "current_admin_qpairs": 0, 00:18:10.311 "current_io_qpairs": 0, 00:18:10.311 "pending_bdev_io": 0, 00:18:10.311 "completed_nvme_io": 219, 00:18:10.311 "transports": [ 00:18:10.311 { 00:18:10.311 "trtype": "TCP" 00:18:10.311 } 00:18:10.311 ] 00:18:10.311 } 00:18:10.311 ] 00:18:10.311 }' 00:18:10.311 23:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:10.311 rmmod nvme_tcp 00:18:10.311 rmmod nvme_fabrics 00:18:10.311 rmmod nvme_keyring 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 301762 ']' 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 301762 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 301762 ']' 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 301762 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 301762 00:18:10.311 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.312 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.312 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 301762' 
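The rpc.sh@112/@113 checks above sum one counter per poll group out of the nvmf_get_stats JSON. A sketch of that jsum step, assuming the JSON dump is held in $stats as in the trace:

# Pull one number per poll group out of the stats JSON and sum with awk.
jsum_sketch() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

# With the stats dumped above:
#   jsum_sketch '.poll_groups[].admin_qpairs'   # -> 7   (2+2+1+2)
#   jsum_sketch '.poll_groups[].io_qpairs'      # -> 672 (4*168)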
00:18:10.312 killing process with pid 301762 00:18:10.312 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 301762 00:18:10.312 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 301762 00:18:10.572 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:10.572 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:10.572 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:10.572 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:18:10.572 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:18:10.572 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:18:10.572 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:10.572 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:10.572 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:10.572 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.572 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.572 23:59:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:13.112 00:18:13.112 real 0m32.981s 00:18:13.112 user 1m39.603s 00:18:13.112 sys 0m6.493s 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.112 ************************************ 00:18:13.112 END TEST nvmf_rpc 00:18:13.112 ************************************ 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:13.112 ************************************ 00:18:13.112 START TEST nvmf_invalid 00:18:13.112 ************************************ 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:13.112 * Looking for test storage... 
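The nvmftestfini teardown traced at the end of the nvmf_rpc test follows the steps below; this is only a rough outline of what the log shows, with the guards and retries from nvmf/common.sh omitted and $nvmfpid (the target app started earlier in the run) assumed to be set.

sync
modprobe -v -r nvme-tcp        # log shows nvme_tcp/nvme_fabrics/nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop test-only rules
# (_remove_spdk_ns also runs here to tear down the test network namespaces)
ip -4 addr flush cvl_0_1                               # clear the test address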
00:18:13.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:13.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.112 --rc genhtml_branch_coverage=1 00:18:13.112 --rc genhtml_function_coverage=1 00:18:13.112 --rc genhtml_legend=1 00:18:13.112 --rc geninfo_all_blocks=1 00:18:13.112 --rc geninfo_unexecuted_blocks=1 00:18:13.112 00:18:13.112 ' 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:13.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.112 --rc genhtml_branch_coverage=1 00:18:13.112 --rc genhtml_function_coverage=1 00:18:13.112 --rc genhtml_legend=1 00:18:13.112 --rc geninfo_all_blocks=1 00:18:13.112 --rc geninfo_unexecuted_blocks=1 00:18:13.112 00:18:13.112 ' 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:13.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.112 --rc genhtml_branch_coverage=1 00:18:13.112 --rc genhtml_function_coverage=1 00:18:13.112 --rc genhtml_legend=1 00:18:13.112 --rc geninfo_all_blocks=1 00:18:13.112 --rc geninfo_unexecuted_blocks=1 00:18:13.112 00:18:13.112 ' 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:13.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.112 --rc genhtml_branch_coverage=1 00:18:13.112 --rc genhtml_function_coverage=1 00:18:13.112 --rc genhtml_legend=1 00:18:13.112 --rc geninfo_all_blocks=1 00:18:13.112 --rc geninfo_unexecuted_blocks=1 00:18:13.112 00:18:13.112 ' 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:13.112 23:59:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.112 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:13.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:13.113 23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:19.688 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:19.688 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:19.688 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:19.689 Found net devices under 0000:86:00.0: cvl_0_0 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:19.689 Found net devices under 0000:86:00.1: cvl_0_1 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:19.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:18:19.689 00:18:19.689 --- 10.0.0.2 ping statistics --- 00:18:19.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.689 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:18:19.689 00:18:19.689 --- 10.0.0.1 ping statistics --- 00:18:19.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.689 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=309434 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 309434 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 309434 ']' 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:19.689 [2024-12-09 23:59:53.756440] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
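For reference, the target-side TCP topology that nvmftestinit builds in the trace above reduces to the following sketch. Every command is taken from this run's trace; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones detected here and will differ on other hosts, the nvmf_tgt path is shortened from the absolute workspace path, and a root shell on the same machine is assumed.

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP into the host side
  ping -c 1 10.0.0.2                                                  # host -> namespace reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host reachability
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &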
00:18:19.689 [2024-12-09 23:59:53.756484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.689 [2024-12-09 23:59:53.834885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.689 [2024-12-09 23:59:53.874828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.689 [2024-12-09 23:59:53.874862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.689 [2024-12-09 23:59:53.874871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.689 [2024-12-09 23:59:53.874877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.689 [2024-12-09 23:59:53.874881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.689 [2024-12-09 23:59:53.876460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.689 [2024-12-09 23:59:53.876565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.689 [2024-12-09 23:59:53.876670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.689 [2024-12-09 23:59:53.876672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.689 23:59:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:19.689 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.689 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:19.689 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode656 00:18:19.689 [2024-12-09 23:59:54.191687] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:19.689 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:19.689 { 00:18:19.689 "nqn": "nqn.2016-06.io.spdk:cnode656", 00:18:19.689 "tgt_name": "foobar", 00:18:19.689 "method": "nvmf_create_subsystem", 00:18:19.689 "req_id": 1 00:18:19.689 } 00:18:19.689 Got JSON-RPC error response 00:18:19.689 response: 00:18:19.689 { 00:18:19.689 "code": -32603, 00:18:19.689 "message": "Unable to find target foobar" 00:18:19.689 }' 00:18:19.689 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:19.689 { 00:18:19.689 "nqn": "nqn.2016-06.io.spdk:cnode656", 00:18:19.689 "tgt_name": "foobar", 00:18:19.689 "method": "nvmf_create_subsystem", 00:18:19.689 "req_id": 1 00:18:19.689 } 00:18:19.689 Got JSON-RPC error response 00:18:19.689 
response: 00:18:19.689 { 00:18:19.689 "code": -32603, 00:18:19.689 "message": "Unable to find target foobar" 00:18:19.689 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:19.689 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:19.689 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15366 00:18:19.689 [2024-12-09 23:59:54.400400] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15366: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:19.690 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:19.690 { 00:18:19.690 "nqn": "nqn.2016-06.io.spdk:cnode15366", 00:18:19.690 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:19.690 "method": "nvmf_create_subsystem", 00:18:19.690 "req_id": 1 00:18:19.690 } 00:18:19.690 Got JSON-RPC error response 00:18:19.690 response: 00:18:19.690 { 00:18:19.690 "code": -32602, 00:18:19.690 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:19.690 }' 00:18:19.690 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:19.690 { 00:18:19.690 "nqn": "nqn.2016-06.io.spdk:cnode15366", 00:18:19.690 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:19.690 "method": "nvmf_create_subsystem", 00:18:19.690 "req_id": 1 00:18:19.690 } 00:18:19.690 Got JSON-RPC error response 00:18:19.690 response: 00:18:19.690 { 00:18:19.690 "code": -32602, 00:18:19.690 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:19.690 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:19.690 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:19.690 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27745 00:18:19.690 [2024-12-09 23:59:54.617096] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27745: invalid model number 'SPDK_Controller' 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:19.950 { 00:18:19.950 "nqn": "nqn.2016-06.io.spdk:cnode27745", 00:18:19.950 "model_number": "SPDK_Controller\u001f", 00:18:19.950 "method": "nvmf_create_subsystem", 00:18:19.950 "req_id": 1 00:18:19.950 } 00:18:19.950 Got JSON-RPC error response 00:18:19.950 response: 00:18:19.950 { 00:18:19.950 "code": -32602, 00:18:19.950 "message": "Invalid MN SPDK_Controller\u001f" 00:18:19.950 }' 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:19.950 { 00:18:19.950 "nqn": "nqn.2016-06.io.spdk:cnode27745", 00:18:19.950 "model_number": "SPDK_Controller\u001f", 00:18:19.950 "method": "nvmf_create_subsystem", 00:18:19.950 "req_id": 1 00:18:19.950 } 00:18:19.950 Got JSON-RPC error response 00:18:19.950 response: 00:18:19.950 { 00:18:19.950 "code": -32602, 00:18:19.950 "message": "Invalid MN SPDK_Controller\u001f" 00:18:19.950 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:19.950 23:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.950 23:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:18:19.950 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 
00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]] 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'hHp/5sbU!OY5t/uD[$vE' 00:18:19.951 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -s 'hHp/5sbU!OY5t/uD[$vE' nqn.2016-06.io.spdk:cnode31107 00:18:20.211 [2024-12-09 23:59:54.962281] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31107: invalid serial number 
'hHp/5sbU!OY5t/uD[$vE' 00:18:20.211 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:20.211 { 00:18:20.211 "nqn": "nqn.2016-06.io.spdk:cnode31107", 00:18:20.211 "serial_number": "hHp/5sbU!OY5t/uD[$v\u007fE", 00:18:20.211 "method": "nvmf_create_subsystem", 00:18:20.211 "req_id": 1 00:18:20.211 } 00:18:20.211 Got JSON-RPC error response 00:18:20.211 response: 00:18:20.211 { 00:18:20.211 "code": -32602, 00:18:20.211 "message": "Invalid SN hHp/5sbU!OY5t/uD[$v\u007fE" 00:18:20.211 }' 00:18:20.211 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:20.211 { 00:18:20.211 "nqn": "nqn.2016-06.io.spdk:cnode31107", 00:18:20.211 "serial_number": "hHp/5sbU!OY5t/uD[$v\u007fE", 00:18:20.211 "method": "nvmf_create_subsystem", 00:18:20.211 "req_id": 1 00:18:20.211 } 00:18:20.211 Got JSON-RPC error response 00:18:20.211 response: 00:18:20.211 { 00:18:20.211 "code": -32602, 00:18:20.211 "message": "Invalid SN hHp/5sbU!OY5t/uD[$v\u007fE" 00:18:20.211 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:20.211 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:20.211 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:20.211 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:20.211 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:20.211 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:20.211 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:20.211 23:59:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x6f' 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 64 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:20.211 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:20.212 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:18:20.470 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# string+=$'\177' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x43' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]] 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'tOoCHr2&h@L)hmp(?`\2KMH1wQrHe^J7?y\hdp9C' 00:18:20.471 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -d 'tOoCHr2&h@L)hmp(?`\2KMH1wQrHe^J7?y\hdp9C' nqn.2016-06.io.spdk:cnode25639 00:18:20.729 [2024-12-09 23:59:55.427766] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25639: invalid model number 'tOoCHr2&h@L)hmp(?`\2KMH1wQrHe^J7?y\hdp9C' 00:18:20.729 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:20.729 { 00:18:20.729 "nqn": "nqn.2016-06.io.spdk:cnode25639", 00:18:20.729 "model_number": "tOoCHr2&h@L)hmp(?`\\2KMH1wQrHe\u007f^J7?y\\hdp9C", 00:18:20.729 "method": "nvmf_create_subsystem", 00:18:20.729 "req_id": 1 00:18:20.729 } 00:18:20.729 Got JSON-RPC error response 00:18:20.729 response: 00:18:20.729 { 00:18:20.729 "code": -32602, 00:18:20.729 "message": "Invalid MN tOoCHr2&h@L)hmp(?`\\2KMH1wQrHe\u007f^J7?y\\hdp9C" 00:18:20.729 }' 00:18:20.729 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:20.729 { 00:18:20.729 "nqn": "nqn.2016-06.io.spdk:cnode25639", 00:18:20.729 "model_number": "tOoCHr2&h@L)hmp(?`\\2KMH1wQrHe\u007f^J7?y\\hdp9C", 00:18:20.729 "method": "nvmf_create_subsystem", 00:18:20.729 "req_id": 1 00:18:20.729 } 00:18:20.729 Got JSON-RPC error response 00:18:20.729 response: 00:18:20.729 { 00:18:20.729 "code": -32602, 00:18:20.729 "message": "Invalid MN tOoCHr2&h@L)hmp(?`\\2KMH1wQrHe\u007f^J7?y\\hdp9C" 00:18:20.729 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:20.729 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:20.729 [2024-12-09 23:59:55.632505] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.989 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:20.989 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:20.989 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:18:20.989 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:20.989 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:18:20.989 23:59:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:21.248 [2024-12-09 23:59:56.065949] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:21.248 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 
00:18:21.248 { 00:18:21.248 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:21.248 "listen_address": { 00:18:21.248 "trtype": "tcp", 00:18:21.248 "traddr": "", 00:18:21.248 "trsvcid": "4421" 00:18:21.248 }, 00:18:21.248 "method": "nvmf_subsystem_remove_listener", 00:18:21.248 "req_id": 1 00:18:21.248 } 00:18:21.248 Got JSON-RPC error response 00:18:21.248 response: 00:18:21.248 { 00:18:21.248 "code": -32602, 00:18:21.248 "message": "Invalid parameters" 00:18:21.248 }' 00:18:21.248 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:21.248 { 00:18:21.248 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:21.248 "listen_address": { 00:18:21.248 "trtype": "tcp", 00:18:21.248 "traddr": "", 00:18:21.248 "trsvcid": "4421" 00:18:21.248 }, 00:18:21.248 "method": "nvmf_subsystem_remove_listener", 00:18:21.248 "req_id": 1 00:18:21.248 } 00:18:21.248 Got JSON-RPC error response 00:18:21.248 response: 00:18:21.248 { 00:18:21.248 "code": -32602, 00:18:21.248 "message": "Invalid parameters" 00:18:21.248 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:21.248 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19021 -i 0 00:18:21.508 [2024-12-09 23:59:56.270595] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19021: invalid cntlid range [0-65519] 00:18:21.508 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:21.508 { 00:18:21.508 "nqn": "nqn.2016-06.io.spdk:cnode19021", 00:18:21.508 "min_cntlid": 0, 00:18:21.508 "method": "nvmf_create_subsystem", 00:18:21.508 "req_id": 1 00:18:21.508 } 00:18:21.508 Got JSON-RPC error response 00:18:21.508 response: 00:18:21.508 { 00:18:21.508 "code": -32602, 00:18:21.508 "message": "Invalid cntlid range [0-65519]" 00:18:21.508 }' 00:18:21.508 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:21.508 { 00:18:21.508 "nqn": "nqn.2016-06.io.spdk:cnode19021", 00:18:21.508 "min_cntlid": 0, 00:18:21.508 "method": "nvmf_create_subsystem", 00:18:21.508 "req_id": 1 00:18:21.508 } 00:18:21.508 Got JSON-RPC error response 00:18:21.508 response: 00:18:21.508 { 00:18:21.508 "code": -32602, 00:18:21.508 "message": "Invalid cntlid range [0-65519]" 00:18:21.508 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:21.508 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27976 -i 65520 00:18:21.767 [2024-12-09 23:59:56.467261] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27976: invalid cntlid range [65520-65519] 00:18:21.767 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:21.767 { 00:18:21.767 "nqn": "nqn.2016-06.io.spdk:cnode27976", 00:18:21.767 "min_cntlid": 65520, 00:18:21.767 "method": "nvmf_create_subsystem", 00:18:21.767 "req_id": 1 00:18:21.767 } 00:18:21.767 Got JSON-RPC error response 00:18:21.767 response: 00:18:21.767 { 00:18:21.767 "code": -32602, 00:18:21.767 "message": "Invalid cntlid range [65520-65519]" 00:18:21.767 }' 00:18:21.768 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:21.768 { 00:18:21.768 "nqn": "nqn.2016-06.io.spdk:cnode27976", 00:18:21.768 "min_cntlid": 65520, 
00:18:21.768 "method": "nvmf_create_subsystem", 00:18:21.768 "req_id": 1 00:18:21.768 } 00:18:21.768 Got JSON-RPC error response 00:18:21.768 response: 00:18:21.768 { 00:18:21.768 "code": -32602, 00:18:21.768 "message": "Invalid cntlid range [65520-65519]" 00:18:21.768 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:21.768 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21154 -I 0 00:18:21.768 [2024-12-09 23:59:56.667940] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21154: invalid cntlid range [1-0] 00:18:21.768 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:21.768 { 00:18:21.768 "nqn": "nqn.2016-06.io.spdk:cnode21154", 00:18:21.768 "max_cntlid": 0, 00:18:21.768 "method": "nvmf_create_subsystem", 00:18:21.768 "req_id": 1 00:18:21.768 } 00:18:21.768 Got JSON-RPC error response 00:18:21.768 response: 00:18:21.768 { 00:18:21.768 "code": -32602, 00:18:21.768 "message": "Invalid cntlid range [1-0]" 00:18:21.768 }' 00:18:21.768 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:21.768 { 00:18:21.768 "nqn": "nqn.2016-06.io.spdk:cnode21154", 00:18:21.768 "max_cntlid": 0, 00:18:21.768 "method": "nvmf_create_subsystem", 00:18:21.768 "req_id": 1 00:18:21.768 } 00:18:21.768 Got JSON-RPC error response 00:18:21.768 response: 00:18:21.768 { 00:18:21.768 "code": -32602, 00:18:21.768 "message": "Invalid cntlid range [1-0]" 00:18:21.768 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:21.768 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24244 -I 65520 00:18:22.027 [2024-12-09 23:59:56.868626] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24244: invalid cntlid range [1-65520] 00:18:22.027 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:22.027 { 00:18:22.027 "nqn": "nqn.2016-06.io.spdk:cnode24244", 00:18:22.027 "max_cntlid": 65520, 00:18:22.027 "method": "nvmf_create_subsystem", 00:18:22.027 "req_id": 1 00:18:22.027 } 00:18:22.027 Got JSON-RPC error response 00:18:22.027 response: 00:18:22.027 { 00:18:22.027 "code": -32602, 00:18:22.027 "message": "Invalid cntlid range [1-65520]" 00:18:22.027 }' 00:18:22.027 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:22.027 { 00:18:22.027 "nqn": "nqn.2016-06.io.spdk:cnode24244", 00:18:22.027 "max_cntlid": 65520, 00:18:22.027 "method": "nvmf_create_subsystem", 00:18:22.027 "req_id": 1 00:18:22.027 } 00:18:22.027 Got JSON-RPC error response 00:18:22.027 response: 00:18:22.027 { 00:18:22.027 "code": -32602, 00:18:22.027 "message": "Invalid cntlid range [1-65520]" 00:18:22.027 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:22.027 23:59:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13675 -i 6 -I 5 00:18:22.286 [2024-12-09 23:59:57.061324] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13675: invalid cntlid range [6-5] 00:18:22.286 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # 
out='request: 00:18:22.286 { 00:18:22.286 "nqn": "nqn.2016-06.io.spdk:cnode13675", 00:18:22.286 "min_cntlid": 6, 00:18:22.286 "max_cntlid": 5, 00:18:22.286 "method": "nvmf_create_subsystem", 00:18:22.286 "req_id": 1 00:18:22.286 } 00:18:22.286 Got JSON-RPC error response 00:18:22.286 response: 00:18:22.286 { 00:18:22.286 "code": -32602, 00:18:22.286 "message": "Invalid cntlid range [6-5]" 00:18:22.286 }' 00:18:22.286 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:22.286 { 00:18:22.286 "nqn": "nqn.2016-06.io.spdk:cnode13675", 00:18:22.286 "min_cntlid": 6, 00:18:22.286 "max_cntlid": 5, 00:18:22.286 "method": "nvmf_create_subsystem", 00:18:22.286 "req_id": 1 00:18:22.286 } 00:18:22.286 Got JSON-RPC error response 00:18:22.286 response: 00:18:22.286 { 00:18:22.286 "code": -32602, 00:18:22.286 "message": "Invalid cntlid range [6-5]" 00:18:22.286 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:22.286 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:22.286 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:22.286 { 00:18:22.286 "name": "foobar", 00:18:22.286 "method": "nvmf_delete_target", 00:18:22.286 "req_id": 1 00:18:22.286 } 00:18:22.286 Got JSON-RPC error response 00:18:22.286 response: 00:18:22.286 { 00:18:22.286 "code": -32602, 00:18:22.286 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:22.286 }' 00:18:22.286 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:22.286 { 00:18:22.286 "name": "foobar", 00:18:22.286 "method": "nvmf_delete_target", 00:18:22.286 "req_id": 1 00:18:22.286 } 00:18:22.286 Got JSON-RPC error response 00:18:22.286 response: 00:18:22.286 { 00:18:22.286 "code": -32602, 00:18:22.286 "message": "The specified target doesn't exist, cannot delete it." 
00:18:22.286 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:22.286 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:22.286 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:22.286 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:22.286 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:22.286 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:22.286 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:22.286 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:22.286 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:22.286 rmmod nvme_tcp 00:18:22.546 rmmod nvme_fabrics 00:18:22.546 rmmod nvme_keyring 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 309434 ']' 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 309434 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 309434 ']' 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 309434 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 309434 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 309434' 00:18:22.546 killing process with pid 309434 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 309434 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 309434 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:18:22.546 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:18:22.806 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:22.806 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:22.806 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.806 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.806 23:59:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.715 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:24.715 00:18:24.715 real 0m12.016s 00:18:24.715 user 0m18.642s 00:18:24.715 sys 0m5.378s 00:18:24.715 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.715 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:24.715 ************************************ 00:18:24.715 END TEST nvmf_invalid 00:18:24.715 ************************************ 00:18:24.715 23:59:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:24.715 23:59:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:24.715 23:59:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.715 23:59:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:24.715 ************************************ 00:18:24.715 START TEST nvmf_connect_stress 00:18:24.715 ************************************ 00:18:24.715 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:24.975 * Looking for test storage... 
00:18:24.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:18:24.975 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:24.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.976 --rc genhtml_branch_coverage=1 00:18:24.976 --rc genhtml_function_coverage=1 00:18:24.976 --rc genhtml_legend=1 00:18:24.976 --rc geninfo_all_blocks=1 00:18:24.976 --rc geninfo_unexecuted_blocks=1 00:18:24.976 00:18:24.976 ' 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:24.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.976 --rc genhtml_branch_coverage=1 00:18:24.976 --rc genhtml_function_coverage=1 00:18:24.976 --rc genhtml_legend=1 00:18:24.976 --rc geninfo_all_blocks=1 00:18:24.976 --rc geninfo_unexecuted_blocks=1 00:18:24.976 00:18:24.976 ' 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:24.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.976 --rc genhtml_branch_coverage=1 00:18:24.976 --rc genhtml_function_coverage=1 00:18:24.976 --rc genhtml_legend=1 00:18:24.976 --rc geninfo_all_blocks=1 00:18:24.976 --rc geninfo_unexecuted_blocks=1 00:18:24.976 00:18:24.976 ' 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:24.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.976 --rc genhtml_branch_coverage=1 00:18:24.976 --rc genhtml_function_coverage=1 00:18:24.976 --rc genhtml_legend=1 00:18:24.976 --rc geninfo_all_blocks=1 00:18:24.976 --rc geninfo_unexecuted_blocks=1 00:18:24.976 00:18:24.976 ' 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:24.976 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:24.976 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:24.977 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:24.977 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:24.977 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:24.977 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.977 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:24.977 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:24.977 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:24.977 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.977 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.977 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.977 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:24.977 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:24.977 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:24.977 23:59:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:31.562 00:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:31.562 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:31.562 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:31.562 Found net devices under 0000:86:00.0: cvl_0_0 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:31.562 Found net devices under 0000:86:00.1: cvl_0_1 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:31.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:18:31.562 00:18:31.562 --- 10.0.0.2 ping statistics --- 00:18:31.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.562 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:31.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:18:31.562 00:18:31.562 --- 10.0.0.1 ping statistics --- 00:18:31.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.562 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=314053 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 314053 00:18:31.562 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:31.563 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 314053 ']' 00:18:31.563 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.563 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.563 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:31.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.563 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.563 00:00:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.563 [2024-12-10 00:00:05.828789] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:18:31.563 [2024-12-10 00:00:05.828836] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.563 [2024-12-10 00:00:05.908177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:31.563 [2024-12-10 00:00:05.948008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.563 [2024-12-10 00:00:05.948045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.563 [2024-12-10 00:00:05.948052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.563 [2024-12-10 00:00:05.948058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.563 [2024-12-10 00:00:05.948063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.563 [2024-12-10 00:00:05.949572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.563 [2024-12-10 00:00:05.949676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:31.563 [2024-12-10 00:00:05.949682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.563 [2024-12-10 00:00:06.099319] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
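For reference, the target bring-up that connect_stress.sh drives through rpc_cmd in the surrounding trace reduces to the RPC sequence below. This is a minimal sketch assembled only from arguments visible in this trace: it assumes a plain nvmf_tgt reachable on the default /var/tmp/spdk.sock rather than the netns-wrapped instance (ip netns exec cvl_0_0_ns_spdk) this job uses, and it covers only the steps shown in this excerpt.
# sketch only: commands and arguments copied from the xtrace; paths relative to the spdk checkout
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512
# stressor invoked below in the trace; -t 10 appears to bound the run to ten seconds against the listener above
test/nvme/connect_stress/connect_stress -c 0x1 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10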
00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.563 [2024-12-10 00:00:06.119534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.563 NULL1 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=314318 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.txt 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.txt 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:31.563 00:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.563 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:31.821 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.821 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:31.821 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:31.821 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.821 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.079 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.079 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:32.079 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.079 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.079 00:00:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.336 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.336 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:32.336 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.336 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.336 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:32.593 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.593 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:32.593 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:32.593 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.593 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.157 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.157 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:33.157 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.157 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.157 00:00:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.414 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.414 00:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:33.414 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.414 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.414 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.673 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.673 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:33.673 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.673 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.673 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:33.930 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.930 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:33.930 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:33.930 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.930 00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.504 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.504 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:34.504 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.504 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.504 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:34.762 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.762 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:34.762 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:34.762 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.762 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.019 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.019 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:35.019 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.019 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.019 00:00:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.277 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.277 00:00:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:35.277 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.277 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.277 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:35.534 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.534 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:35.534 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:35.534 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.534 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.099 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.099 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:36.099 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.099 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.099 00:00:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.356 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.356 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:36.356 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.356 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.356 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.616 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.616 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:36.616 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.616 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.616 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:36.875 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.875 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:36.875 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:36.875 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.875 00:00:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.134 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.134 00:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:37.134 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.134 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.134 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.701 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.701 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:37.701 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.701 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.701 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:37.960 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.960 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:37.960 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:37.960 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.960 00:00:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.219 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.219 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:38.219 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.219 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.219 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:38.477 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.477 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:38.477 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:38.477 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.477 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.045 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.045 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:39.045 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.045 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.045 00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.304 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.304 00:00:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:39.304 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.304 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.304 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.563 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.563 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:39.563 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.563 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.563 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:39.822 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.822 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:39.822 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:39.822 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.822 00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.081 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.081 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:40.081 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.081 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.081 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.647 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.647 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:40.647 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.647 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.647 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:40.906 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.906 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:40.906 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:40.906 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.906 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.165 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.165 00:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:41.165 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:41.165 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.165 00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:41.425 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:41.425 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.425 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 314318 00:18:41.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (314318) - No such process 00:18:41.425 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 314318 00:18:41.425 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.txt 00:18:41.425 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:41.425 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:41.425 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:41.425 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:41.425 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:41.425 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:41.425 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:41.425 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:41.425 rmmod nvme_tcp 00:18:41.425 rmmod nvme_fabrics 00:18:41.684 rmmod nvme_keyring 00:18:41.684 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:41.684 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:41.684 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:41.684 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 314053 ']' 00:18:41.684 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 314053 00:18:41.685 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 314053 ']' 00:18:41.685 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 314053 00:18:41.685 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:18:41.685 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.685 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 314053 00:18:41.685 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
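The repeated connect_stress.sh@34 / @35 entries above are the stress test's liveness poll: while the backgrounded stressor (pid 314318 in this run) is still alive, the harness keeps replaying a batch of management RPCs against the target, and the loop only ends once kill -0 fails with "No such process". A minimal bash sketch of that pattern follows; the variable names and the redirect of the rpc.txt batch file are illustrative assumptions, not the literal connect_stress.sh source.

    # Assumed shape of the poll loop traced above (connect_stress.sh@34-@39).
    STRESS_PID=$!              # pid of the backgrounded stressor (314318 in this run)
    RPC_BATCH=rpc.txt          # RPC batch assembled by the earlier seq/cat loop

    while kill -0 "$STRESS_PID"; do    # @34: is the stressor still running?
        rpc_cmd <"$RPC_BATCH"          # @35: replay management RPCs at the target
    done

    wait "$STRESS_PID"                 # @38: reap the stressor once it exits
    rm -f "$RPC_BATCH"                 # @39: drop the batch file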
00:18:41.685 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:41.685 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 314053' 00:18:41.685 killing process with pid 314053 00:18:41.685 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 314053 00:18:41.685 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 314053 00:18:41.944 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:41.944 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:41.944 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:41.944 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:41.944 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:41.944 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:41.944 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:41.944 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:41.944 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:41.944 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.944 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.944 00:00:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.858 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:43.858 00:18:43.858 real 0m19.075s 00:18:43.858 user 0m41.225s 00:18:43.858 sys 0m6.960s 00:18:43.858 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.858 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.858 ************************************ 00:18:43.858 END TEST nvmf_connect_stress 00:18:43.858 ************************************ 00:18:43.858 00:00:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:43.858 00:00:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:43.858 00:00:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.858 00:00:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:43.858 ************************************ 00:18:43.858 START TEST nvmf_fused_ordering 00:18:43.858 ************************************ 00:18:43.858 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:44.118 * Looking for test storage... 
00:18:44.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.118 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:44.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.119 --rc genhtml_branch_coverage=1 00:18:44.119 --rc genhtml_function_coverage=1 00:18:44.119 --rc genhtml_legend=1 00:18:44.119 --rc geninfo_all_blocks=1 00:18:44.119 --rc geninfo_unexecuted_blocks=1 00:18:44.119 00:18:44.119 ' 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:44.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.119 --rc genhtml_branch_coverage=1 00:18:44.119 --rc genhtml_function_coverage=1 00:18:44.119 --rc genhtml_legend=1 00:18:44.119 --rc geninfo_all_blocks=1 00:18:44.119 --rc geninfo_unexecuted_blocks=1 00:18:44.119 00:18:44.119 ' 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:44.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.119 --rc genhtml_branch_coverage=1 00:18:44.119 --rc genhtml_function_coverage=1 00:18:44.119 --rc genhtml_legend=1 00:18:44.119 --rc geninfo_all_blocks=1 00:18:44.119 --rc geninfo_unexecuted_blocks=1 00:18:44.119 00:18:44.119 ' 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:44.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.119 --rc genhtml_branch_coverage=1 00:18:44.119 --rc genhtml_function_coverage=1 00:18:44.119 --rc genhtml_legend=1 00:18:44.119 --rc geninfo_all_blocks=1 00:18:44.119 --rc geninfo_unexecuted_blocks=1 00:18:44.119 00:18:44.119 ' 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:44.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:44.119 00:00:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:50.696 00:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:50.696 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:50.696 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:50.696 Found net devices under 0000:86:00.0: cvl_0_0 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:50.696 Found net devices under 0000:86:00.1: cvl_0_1 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:50.696 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:50.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.493 ms 00:18:50.697 00:18:50.697 --- 10.0.0.2 ping statistics --- 00:18:50.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.697 rtt min/avg/max/mdev = 0.493/0.493/0.493/0.000 ms 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:50.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:50.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:18:50.697 00:18:50.697 --- 10.0.0.1 ping statistics --- 00:18:50.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.697 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=319525 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 319525 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 319525 ']' 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:50.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.697 00:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:50.697 [2024-12-10 00:00:24.954645] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:18:50.697 [2024-12-10 00:00:24.954699] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.697 [2024-12-10 00:00:25.037220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.697 [2024-12-10 00:00:25.077594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.697 [2024-12-10 00:00:25.077627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.697 [2024-12-10 00:00:25.077635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.697 [2024-12-10 00:00:25.077641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.697 [2024-12-10 00:00:25.077646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.697 [2024-12-10 00:00:25.078154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:50.697 [2024-12-10 00:00:25.213116] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:50.697 [2024-12-10 00:00:25.229298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:50.697 NULL1 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.697 00:00:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:50.697 [2024-12-10 00:00:25.287008] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
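The rpc_cmd calls traced at fused_ordering.sh@15-@20 are ordinary SPDK JSON-RPCs, so the same target can be stood up by hand with scripts/rpc.py against the /var/tmp/spdk.sock socket the nvmf_tgt above listens on. The sketch below restates those calls with their arguments copied from the trace; running it from the SPDK source root, with the target from this log still up and its namespace plumbing (cvl_0_0_ns_spdk, 10.0.0.2) in place, is an assumption on top of what the log shows.

    # Recreate the fused_ordering target configuration shown in the trace.
    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"          # paths relative to the SPDK source root

    $RPC nvmf_create_transport -t tcp -o -u 8192        # @15: TCP transport, flags as traced
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # @16
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # @17
    $RPC bdev_null_create NULL1 1000 512                # @18: null bdev backing the test namespace
    $RPC bdev_wait_for_examine                          # @19
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1                             # @20

    # @22: the fused_ordering app then connects to that subsystem over TCP
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'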
00:18:50.697 [2024-12-10 00:00:25.287039] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid319548 ] 00:18:50.958 Attached to nqn.2016-06.io.spdk:cnode1 00:18:50.958 Namespace ID: 1 size: 1GB 00:18:50.958 fused_ordering(0) 00:18:50.958 fused_ordering(1) 00:18:50.958 fused_ordering(2) 00:18:50.958 fused_ordering(3) 00:18:50.958 fused_ordering(4) 00:18:50.958 fused_ordering(5) 00:18:50.958 fused_ordering(6) 00:18:50.958 fused_ordering(7) 00:18:50.958 fused_ordering(8) 00:18:50.958 fused_ordering(9) 00:18:50.958 fused_ordering(10) 00:18:50.958 fused_ordering(11) 00:18:50.958 fused_ordering(12) 00:18:50.958 fused_ordering(13) 00:18:50.958 fused_ordering(14) 00:18:50.958 fused_ordering(15) 00:18:50.958 fused_ordering(16) 00:18:50.958 fused_ordering(17) 00:18:50.958 fused_ordering(18) 00:18:50.958 fused_ordering(19) 00:18:50.958 fused_ordering(20) 00:18:50.958 fused_ordering(21) 00:18:50.958 fused_ordering(22) 00:18:50.958 fused_ordering(23) 00:18:50.958 fused_ordering(24) 00:18:50.958 fused_ordering(25) 00:18:50.958 fused_ordering(26) 00:18:50.958 fused_ordering(27) 00:18:50.958 fused_ordering(28) 00:18:50.958 fused_ordering(29) 00:18:50.958 fused_ordering(30) 00:18:50.958 fused_ordering(31) 00:18:50.958 fused_ordering(32) 00:18:50.958 fused_ordering(33) 00:18:50.958 fused_ordering(34) 00:18:50.958 fused_ordering(35) 00:18:50.958 fused_ordering(36) 00:18:50.958 fused_ordering(37) 00:18:50.958 fused_ordering(38) 00:18:50.958 fused_ordering(39) 00:18:50.958 fused_ordering(40) 00:18:50.958 fused_ordering(41) 00:18:50.958 fused_ordering(42) 00:18:50.958 fused_ordering(43) 00:18:50.958 fused_ordering(44) 00:18:50.958 fused_ordering(45) 00:18:50.958 fused_ordering(46) 00:18:50.958 fused_ordering(47) 00:18:50.958 fused_ordering(48) 00:18:50.958 fused_ordering(49) 00:18:50.958 fused_ordering(50) 00:18:50.958 fused_ordering(51) 00:18:50.958 fused_ordering(52) 00:18:50.958 fused_ordering(53) 00:18:50.958 fused_ordering(54) 00:18:50.958 fused_ordering(55) 00:18:50.958 fused_ordering(56) 00:18:50.958 fused_ordering(57) 00:18:50.958 fused_ordering(58) 00:18:50.958 fused_ordering(59) 00:18:50.958 fused_ordering(60) 00:18:50.958 fused_ordering(61) 00:18:50.958 fused_ordering(62) 00:18:50.958 fused_ordering(63) 00:18:50.958 fused_ordering(64) 00:18:50.958 fused_ordering(65) 00:18:50.958 fused_ordering(66) 00:18:50.958 fused_ordering(67) 00:18:50.958 fused_ordering(68) 00:18:50.958 fused_ordering(69) 00:18:50.958 fused_ordering(70) 00:18:50.958 fused_ordering(71) 00:18:50.958 fused_ordering(72) 00:18:50.958 fused_ordering(73) 00:18:50.958 fused_ordering(74) 00:18:50.958 fused_ordering(75) 00:18:50.958 fused_ordering(76) 00:18:50.958 fused_ordering(77) 00:18:50.958 fused_ordering(78) 00:18:50.958 fused_ordering(79) 00:18:50.958 fused_ordering(80) 00:18:50.958 fused_ordering(81) 00:18:50.958 fused_ordering(82) 00:18:50.958 fused_ordering(83) 00:18:50.958 fused_ordering(84) 00:18:50.958 fused_ordering(85) 00:18:50.958 fused_ordering(86) 00:18:50.958 fused_ordering(87) 00:18:50.958 fused_ordering(88) 00:18:50.958 fused_ordering(89) 00:18:50.958 fused_ordering(90) 00:18:50.958 fused_ordering(91) 00:18:50.958 fused_ordering(92) 00:18:50.958 fused_ordering(93) 00:18:50.958 fused_ordering(94) 00:18:50.958 fused_ordering(95) 00:18:50.958 fused_ordering(96) 00:18:50.958 fused_ordering(97) 00:18:50.958 fused_ordering(98) 
00:18:50.958 fused_ordering(99) 00:18:50.958 fused_ordering(100) 00:18:50.958 fused_ordering(101) 00:18:50.958 fused_ordering(102) 00:18:50.958 fused_ordering(103) 00:18:50.958 fused_ordering(104) 00:18:50.958 fused_ordering(105) 00:18:50.958 fused_ordering(106) 00:18:50.958 fused_ordering(107) 00:18:50.958 fused_ordering(108) 00:18:50.958 fused_ordering(109) 00:18:50.958 fused_ordering(110) 00:18:50.958 fused_ordering(111) 00:18:50.958 fused_ordering(112) 00:18:50.958 fused_ordering(113) 00:18:50.958 fused_ordering(114) 00:18:50.958 fused_ordering(115) 00:18:50.958 fused_ordering(116) 00:18:50.958 fused_ordering(117) 00:18:50.958 fused_ordering(118) 00:18:50.958 fused_ordering(119) 00:18:50.958 fused_ordering(120) 00:18:50.958 fused_ordering(121) 00:18:50.958 fused_ordering(122) 00:18:50.958 fused_ordering(123) 00:18:50.958 fused_ordering(124) 00:18:50.958 fused_ordering(125) 00:18:50.958 fused_ordering(126) 00:18:50.958 fused_ordering(127) 00:18:50.958 fused_ordering(128) 00:18:50.958 fused_ordering(129) 00:18:50.958 fused_ordering(130) 00:18:50.958 fused_ordering(131) 00:18:50.958 fused_ordering(132) 00:18:50.958 fused_ordering(133) 00:18:50.958 fused_ordering(134) 00:18:50.958 fused_ordering(135) 00:18:50.958 fused_ordering(136) 00:18:50.958 fused_ordering(137) 00:18:50.958 fused_ordering(138) 00:18:50.958 fused_ordering(139) 00:18:50.958 fused_ordering(140) 00:18:50.958 fused_ordering(141) 00:18:50.958 fused_ordering(142) 00:18:50.958 fused_ordering(143) 00:18:50.958 fused_ordering(144) 00:18:50.958 fused_ordering(145) 00:18:50.958 fused_ordering(146) 00:18:50.958 fused_ordering(147) 00:18:50.958 fused_ordering(148) 00:18:50.958 fused_ordering(149) 00:18:50.958 fused_ordering(150) 00:18:50.958 fused_ordering(151) 00:18:50.958 fused_ordering(152) 00:18:50.958 fused_ordering(153) 00:18:50.959 fused_ordering(154) 00:18:50.959 fused_ordering(155) 00:18:50.959 fused_ordering(156) 00:18:50.959 fused_ordering(157) 00:18:50.959 fused_ordering(158) 00:18:50.959 fused_ordering(159) 00:18:50.959 fused_ordering(160) 00:18:50.959 fused_ordering(161) 00:18:50.959 fused_ordering(162) 00:18:50.959 fused_ordering(163) 00:18:50.959 fused_ordering(164) 00:18:50.959 fused_ordering(165) 00:18:50.959 fused_ordering(166) 00:18:50.959 fused_ordering(167) 00:18:50.959 fused_ordering(168) 00:18:50.959 fused_ordering(169) 00:18:50.959 fused_ordering(170) 00:18:50.959 fused_ordering(171) 00:18:50.959 fused_ordering(172) 00:18:50.959 fused_ordering(173) 00:18:50.959 fused_ordering(174) 00:18:50.959 fused_ordering(175) 00:18:50.959 fused_ordering(176) 00:18:50.959 fused_ordering(177) 00:18:50.959 fused_ordering(178) 00:18:50.959 fused_ordering(179) 00:18:50.959 fused_ordering(180) 00:18:50.959 fused_ordering(181) 00:18:50.959 fused_ordering(182) 00:18:50.959 fused_ordering(183) 00:18:50.959 fused_ordering(184) 00:18:50.959 fused_ordering(185) 00:18:50.959 fused_ordering(186) 00:18:50.959 fused_ordering(187) 00:18:50.959 fused_ordering(188) 00:18:50.959 fused_ordering(189) 00:18:50.959 fused_ordering(190) 00:18:50.959 fused_ordering(191) 00:18:50.959 fused_ordering(192) 00:18:50.959 fused_ordering(193) 00:18:50.959 fused_ordering(194) 00:18:50.959 fused_ordering(195) 00:18:50.959 fused_ordering(196) 00:18:50.959 fused_ordering(197) 00:18:50.959 fused_ordering(198) 00:18:50.959 fused_ordering(199) 00:18:50.959 fused_ordering(200) 00:18:50.959 fused_ordering(201) 00:18:50.959 fused_ordering(202) 00:18:50.959 fused_ordering(203) 00:18:50.959 fused_ordering(204) 00:18:50.959 fused_ordering(205) 00:18:51.219 
fused_ordering(206) 00:18:51.219 fused_ordering(207) 00:18:51.219 fused_ordering(208) 00:18:51.219 fused_ordering(209) 00:18:51.219 fused_ordering(210) 00:18:51.219 fused_ordering(211) 00:18:51.219 fused_ordering(212) 00:18:51.219 fused_ordering(213) 00:18:51.219 fused_ordering(214) 00:18:51.219 fused_ordering(215) 00:18:51.219 fused_ordering(216) 00:18:51.219 fused_ordering(217) 00:18:51.219 fused_ordering(218) 00:18:51.219 fused_ordering(219) 00:18:51.219 fused_ordering(220) 00:18:51.219 fused_ordering(221) 00:18:51.219 fused_ordering(222) 00:18:51.219 fused_ordering(223) 00:18:51.219 fused_ordering(224) 00:18:51.219 fused_ordering(225) 00:18:51.219 fused_ordering(226) 00:18:51.219 fused_ordering(227) 00:18:51.219 fused_ordering(228) 00:18:51.219 fused_ordering(229) 00:18:51.219 fused_ordering(230) 00:18:51.219 fused_ordering(231) 00:18:51.219 fused_ordering(232) 00:18:51.219 fused_ordering(233) 00:18:51.219 fused_ordering(234) 00:18:51.219 fused_ordering(235) 00:18:51.219 fused_ordering(236) 00:18:51.219 fused_ordering(237) 00:18:51.219 fused_ordering(238) 00:18:51.219 fused_ordering(239) 00:18:51.219 fused_ordering(240) 00:18:51.219 fused_ordering(241) 00:18:51.219 fused_ordering(242) 00:18:51.219 fused_ordering(243) 00:18:51.219 fused_ordering(244) 00:18:51.219 fused_ordering(245) 00:18:51.219 fused_ordering(246) 00:18:51.219 fused_ordering(247) 00:18:51.219 fused_ordering(248) 00:18:51.219 fused_ordering(249) 00:18:51.219 fused_ordering(250) 00:18:51.219 fused_ordering(251) 00:18:51.219 fused_ordering(252) 00:18:51.219 fused_ordering(253) 00:18:51.219 fused_ordering(254) 00:18:51.219 fused_ordering(255) 00:18:51.219 fused_ordering(256) 00:18:51.219 fused_ordering(257) 00:18:51.219 fused_ordering(258) 00:18:51.219 fused_ordering(259) 00:18:51.219 fused_ordering(260) 00:18:51.219 fused_ordering(261) 00:18:51.219 fused_ordering(262) 00:18:51.219 fused_ordering(263) 00:18:51.219 fused_ordering(264) 00:18:51.219 fused_ordering(265) 00:18:51.219 fused_ordering(266) 00:18:51.219 fused_ordering(267) 00:18:51.219 fused_ordering(268) 00:18:51.219 fused_ordering(269) 00:18:51.219 fused_ordering(270) 00:18:51.219 fused_ordering(271) 00:18:51.219 fused_ordering(272) 00:18:51.219 fused_ordering(273) 00:18:51.219 fused_ordering(274) 00:18:51.219 fused_ordering(275) 00:18:51.219 fused_ordering(276) 00:18:51.219 fused_ordering(277) 00:18:51.219 fused_ordering(278) 00:18:51.219 fused_ordering(279) 00:18:51.219 fused_ordering(280) 00:18:51.219 fused_ordering(281) 00:18:51.219 fused_ordering(282) 00:18:51.219 fused_ordering(283) 00:18:51.219 fused_ordering(284) 00:18:51.219 fused_ordering(285) 00:18:51.219 fused_ordering(286) 00:18:51.219 fused_ordering(287) 00:18:51.219 fused_ordering(288) 00:18:51.219 fused_ordering(289) 00:18:51.219 fused_ordering(290) 00:18:51.219 fused_ordering(291) 00:18:51.219 fused_ordering(292) 00:18:51.219 fused_ordering(293) 00:18:51.219 fused_ordering(294) 00:18:51.219 fused_ordering(295) 00:18:51.219 fused_ordering(296) 00:18:51.219 fused_ordering(297) 00:18:51.219 fused_ordering(298) 00:18:51.219 fused_ordering(299) 00:18:51.219 fused_ordering(300) 00:18:51.219 fused_ordering(301) 00:18:51.219 fused_ordering(302) 00:18:51.219 fused_ordering(303) 00:18:51.219 fused_ordering(304) 00:18:51.219 fused_ordering(305) 00:18:51.219 fused_ordering(306) 00:18:51.219 fused_ordering(307) 00:18:51.219 fused_ordering(308) 00:18:51.219 fused_ordering(309) 00:18:51.219 fused_ordering(310) 00:18:51.219 fused_ordering(311) 00:18:51.219 fused_ordering(312) 00:18:51.219 fused_ordering(313) 
00:18:51.219 fused_ordering(314) 00:18:51.219 fused_ordering(315) 00:18:51.219 fused_ordering(316) 00:18:51.219 fused_ordering(317) 00:18:51.219 fused_ordering(318) 00:18:51.219 fused_ordering(319) 00:18:51.219 fused_ordering(320) 00:18:51.219 fused_ordering(321) 00:18:51.219 fused_ordering(322) 00:18:51.219 fused_ordering(323) 00:18:51.219 fused_ordering(324) 00:18:51.219 fused_ordering(325) 00:18:51.219 fused_ordering(326) 00:18:51.219 fused_ordering(327) 00:18:51.219 fused_ordering(328) 00:18:51.219 fused_ordering(329) 00:18:51.219 fused_ordering(330) 00:18:51.219 fused_ordering(331) 00:18:51.219 fused_ordering(332) 00:18:51.219 fused_ordering(333) 00:18:51.219 fused_ordering(334) 00:18:51.219 fused_ordering(335) 00:18:51.219 fused_ordering(336) 00:18:51.219 fused_ordering(337) 00:18:51.219 fused_ordering(338) 00:18:51.219 fused_ordering(339) 00:18:51.219 fused_ordering(340) 00:18:51.219 fused_ordering(341) 00:18:51.219 fused_ordering(342) 00:18:51.219 fused_ordering(343) 00:18:51.219 fused_ordering(344) 00:18:51.219 fused_ordering(345) 00:18:51.219 fused_ordering(346) 00:18:51.219 fused_ordering(347) 00:18:51.219 fused_ordering(348) 00:18:51.219 fused_ordering(349) 00:18:51.219 fused_ordering(350) 00:18:51.219 fused_ordering(351) 00:18:51.219 fused_ordering(352) 00:18:51.219 fused_ordering(353) 00:18:51.219 fused_ordering(354) 00:18:51.219 fused_ordering(355) 00:18:51.219 fused_ordering(356) 00:18:51.219 fused_ordering(357) 00:18:51.219 fused_ordering(358) 00:18:51.219 fused_ordering(359) 00:18:51.219 fused_ordering(360) 00:18:51.219 fused_ordering(361) 00:18:51.219 fused_ordering(362) 00:18:51.219 fused_ordering(363) 00:18:51.219 fused_ordering(364) 00:18:51.219 fused_ordering(365) 00:18:51.219 fused_ordering(366) 00:18:51.219 fused_ordering(367) 00:18:51.219 fused_ordering(368) 00:18:51.219 fused_ordering(369) 00:18:51.219 fused_ordering(370) 00:18:51.219 fused_ordering(371) 00:18:51.219 fused_ordering(372) 00:18:51.219 fused_ordering(373) 00:18:51.219 fused_ordering(374) 00:18:51.219 fused_ordering(375) 00:18:51.219 fused_ordering(376) 00:18:51.219 fused_ordering(377) 00:18:51.219 fused_ordering(378) 00:18:51.219 fused_ordering(379) 00:18:51.219 fused_ordering(380) 00:18:51.219 fused_ordering(381) 00:18:51.219 fused_ordering(382) 00:18:51.219 fused_ordering(383) 00:18:51.219 fused_ordering(384) 00:18:51.219 fused_ordering(385) 00:18:51.219 fused_ordering(386) 00:18:51.219 fused_ordering(387) 00:18:51.219 fused_ordering(388) 00:18:51.219 fused_ordering(389) 00:18:51.219 fused_ordering(390) 00:18:51.219 fused_ordering(391) 00:18:51.219 fused_ordering(392) 00:18:51.219 fused_ordering(393) 00:18:51.219 fused_ordering(394) 00:18:51.219 fused_ordering(395) 00:18:51.219 fused_ordering(396) 00:18:51.219 fused_ordering(397) 00:18:51.219 fused_ordering(398) 00:18:51.219 fused_ordering(399) 00:18:51.219 fused_ordering(400) 00:18:51.219 fused_ordering(401) 00:18:51.219 fused_ordering(402) 00:18:51.219 fused_ordering(403) 00:18:51.219 fused_ordering(404) 00:18:51.219 fused_ordering(405) 00:18:51.219 fused_ordering(406) 00:18:51.219 fused_ordering(407) 00:18:51.219 fused_ordering(408) 00:18:51.219 fused_ordering(409) 00:18:51.219 fused_ordering(410) 00:18:51.477 fused_ordering(411) 00:18:51.477 fused_ordering(412) 00:18:51.477 fused_ordering(413) 00:18:51.477 fused_ordering(414) 00:18:51.477 fused_ordering(415) 00:18:51.477 fused_ordering(416) 00:18:51.477 fused_ordering(417) 00:18:51.477 fused_ordering(418) 00:18:51.477 fused_ordering(419) 00:18:51.477 fused_ordering(420) 00:18:51.477 
fused_ordering(421) 00:18:51.477 fused_ordering(422) 00:18:51.477 fused_ordering(423) 00:18:51.477 fused_ordering(424) 00:18:51.477 fused_ordering(425) 00:18:51.477 fused_ordering(426) 00:18:51.477 fused_ordering(427) 00:18:51.477 fused_ordering(428) 00:18:51.477 fused_ordering(429) 00:18:51.477 fused_ordering(430) 00:18:51.477 fused_ordering(431) 00:18:51.477 fused_ordering(432) 00:18:51.477 fused_ordering(433) 00:18:51.477 fused_ordering(434) 00:18:51.477 fused_ordering(435) 00:18:51.477 fused_ordering(436) 00:18:51.477 fused_ordering(437) 00:18:51.477 fused_ordering(438) 00:18:51.477 fused_ordering(439) 00:18:51.477 fused_ordering(440) 00:18:51.477 fused_ordering(441) 00:18:51.477 fused_ordering(442) 00:18:51.477 fused_ordering(443) 00:18:51.477 fused_ordering(444) 00:18:51.477 fused_ordering(445) 00:18:51.477 fused_ordering(446) 00:18:51.477 fused_ordering(447) 00:18:51.478 fused_ordering(448) 00:18:51.478 fused_ordering(449) 00:18:51.478 fused_ordering(450) 00:18:51.478 fused_ordering(451) 00:18:51.478 fused_ordering(452) 00:18:51.478 fused_ordering(453) 00:18:51.478 fused_ordering(454) 00:18:51.478 fused_ordering(455) 00:18:51.478 fused_ordering(456) 00:18:51.478 fused_ordering(457) 00:18:51.478 fused_ordering(458) 00:18:51.478 fused_ordering(459) 00:18:51.478 fused_ordering(460) 00:18:51.478 fused_ordering(461) 00:18:51.478 fused_ordering(462) 00:18:51.478 fused_ordering(463) 00:18:51.478 fused_ordering(464) 00:18:51.478 fused_ordering(465) 00:18:51.478 fused_ordering(466) 00:18:51.478 fused_ordering(467) 00:18:51.478 fused_ordering(468) 00:18:51.478 fused_ordering(469) 00:18:51.478 fused_ordering(470) 00:18:51.478 fused_ordering(471) 00:18:51.478 fused_ordering(472) 00:18:51.478 fused_ordering(473) 00:18:51.478 fused_ordering(474) 00:18:51.478 fused_ordering(475) 00:18:51.478 fused_ordering(476) 00:18:51.478 fused_ordering(477) 00:18:51.478 fused_ordering(478) 00:18:51.478 fused_ordering(479) 00:18:51.478 fused_ordering(480) 00:18:51.478 fused_ordering(481) 00:18:51.478 fused_ordering(482) 00:18:51.478 fused_ordering(483) 00:18:51.478 fused_ordering(484) 00:18:51.478 fused_ordering(485) 00:18:51.478 fused_ordering(486) 00:18:51.478 fused_ordering(487) 00:18:51.478 fused_ordering(488) 00:18:51.478 fused_ordering(489) 00:18:51.478 fused_ordering(490) 00:18:51.478 fused_ordering(491) 00:18:51.478 fused_ordering(492) 00:18:51.478 fused_ordering(493) 00:18:51.478 fused_ordering(494) 00:18:51.478 fused_ordering(495) 00:18:51.478 fused_ordering(496) 00:18:51.478 fused_ordering(497) 00:18:51.478 fused_ordering(498) 00:18:51.478 fused_ordering(499) 00:18:51.478 fused_ordering(500) 00:18:51.478 fused_ordering(501) 00:18:51.478 fused_ordering(502) 00:18:51.478 fused_ordering(503) 00:18:51.478 fused_ordering(504) 00:18:51.478 fused_ordering(505) 00:18:51.478 fused_ordering(506) 00:18:51.478 fused_ordering(507) 00:18:51.478 fused_ordering(508) 00:18:51.478 fused_ordering(509) 00:18:51.478 fused_ordering(510) 00:18:51.478 fused_ordering(511) 00:18:51.478 fused_ordering(512) 00:18:51.478 fused_ordering(513) 00:18:51.478 fused_ordering(514) 00:18:51.478 fused_ordering(515) 00:18:51.478 fused_ordering(516) 00:18:51.478 fused_ordering(517) 00:18:51.478 fused_ordering(518) 00:18:51.478 fused_ordering(519) 00:18:51.478 fused_ordering(520) 00:18:51.478 fused_ordering(521) 00:18:51.478 fused_ordering(522) 00:18:51.478 fused_ordering(523) 00:18:51.478 fused_ordering(524) 00:18:51.478 fused_ordering(525) 00:18:51.478 fused_ordering(526) 00:18:51.478 fused_ordering(527) 00:18:51.478 fused_ordering(528) 
00:18:51.478 fused_ordering(529) 00:18:51.478 fused_ordering(530) 00:18:51.478 fused_ordering(531) 00:18:51.478 fused_ordering(532) 00:18:51.478 fused_ordering(533) 00:18:51.478 fused_ordering(534) 00:18:51.478 fused_ordering(535) 00:18:51.478 fused_ordering(536) 00:18:51.478 fused_ordering(537) 00:18:51.478 fused_ordering(538) 00:18:51.478 fused_ordering(539) 00:18:51.478 fused_ordering(540) 00:18:51.478 fused_ordering(541) 00:18:51.478 fused_ordering(542) 00:18:51.478 fused_ordering(543) 00:18:51.478 fused_ordering(544) 00:18:51.478 fused_ordering(545) 00:18:51.478 fused_ordering(546) 00:18:51.478 fused_ordering(547) 00:18:51.478 fused_ordering(548) 00:18:51.478 fused_ordering(549) 00:18:51.478 fused_ordering(550) 00:18:51.478 fused_ordering(551) 00:18:51.478 fused_ordering(552) 00:18:51.478 fused_ordering(553) 00:18:51.478 fused_ordering(554) 00:18:51.478 fused_ordering(555) 00:18:51.478 fused_ordering(556) 00:18:51.478 fused_ordering(557) 00:18:51.478 fused_ordering(558) 00:18:51.478 fused_ordering(559) 00:18:51.478 fused_ordering(560) 00:18:51.478 fused_ordering(561) 00:18:51.478 fused_ordering(562) 00:18:51.478 fused_ordering(563) 00:18:51.478 fused_ordering(564) 00:18:51.478 fused_ordering(565) 00:18:51.478 fused_ordering(566) 00:18:51.478 fused_ordering(567) 00:18:51.478 fused_ordering(568) 00:18:51.478 fused_ordering(569) 00:18:51.478 fused_ordering(570) 00:18:51.478 fused_ordering(571) 00:18:51.478 fused_ordering(572) 00:18:51.478 fused_ordering(573) 00:18:51.478 fused_ordering(574) 00:18:51.478 fused_ordering(575) 00:18:51.478 fused_ordering(576) 00:18:51.478 fused_ordering(577) 00:18:51.478 fused_ordering(578) 00:18:51.478 fused_ordering(579) 00:18:51.478 fused_ordering(580) 00:18:51.478 fused_ordering(581) 00:18:51.478 fused_ordering(582) 00:18:51.478 fused_ordering(583) 00:18:51.478 fused_ordering(584) 00:18:51.478 fused_ordering(585) 00:18:51.478 fused_ordering(586) 00:18:51.478 fused_ordering(587) 00:18:51.478 fused_ordering(588) 00:18:51.478 fused_ordering(589) 00:18:51.478 fused_ordering(590) 00:18:51.478 fused_ordering(591) 00:18:51.478 fused_ordering(592) 00:18:51.478 fused_ordering(593) 00:18:51.478 fused_ordering(594) 00:18:51.478 fused_ordering(595) 00:18:51.478 fused_ordering(596) 00:18:51.478 fused_ordering(597) 00:18:51.478 fused_ordering(598) 00:18:51.478 fused_ordering(599) 00:18:51.478 fused_ordering(600) 00:18:51.478 fused_ordering(601) 00:18:51.478 fused_ordering(602) 00:18:51.478 fused_ordering(603) 00:18:51.478 fused_ordering(604) 00:18:51.478 fused_ordering(605) 00:18:51.478 fused_ordering(606) 00:18:51.478 fused_ordering(607) 00:18:51.478 fused_ordering(608) 00:18:51.478 fused_ordering(609) 00:18:51.478 fused_ordering(610) 00:18:51.478 fused_ordering(611) 00:18:51.478 fused_ordering(612) 00:18:51.478 fused_ordering(613) 00:18:51.478 fused_ordering(614) 00:18:51.478 fused_ordering(615) 00:18:51.737 fused_ordering(616) 00:18:51.737 fused_ordering(617) 00:18:51.737 fused_ordering(618) 00:18:51.737 fused_ordering(619) 00:18:51.737 fused_ordering(620) 00:18:51.737 fused_ordering(621) 00:18:51.737 fused_ordering(622) 00:18:51.737 fused_ordering(623) 00:18:51.737 fused_ordering(624) 00:18:51.737 fused_ordering(625) 00:18:51.737 fused_ordering(626) 00:18:51.737 fused_ordering(627) 00:18:51.737 fused_ordering(628) 00:18:51.737 fused_ordering(629) 00:18:51.737 fused_ordering(630) 00:18:51.737 fused_ordering(631) 00:18:51.737 fused_ordering(632) 00:18:51.737 fused_ordering(633) 00:18:51.737 fused_ordering(634) 00:18:51.737 fused_ordering(635) 00:18:51.737 
fused_ordering(636) 00:18:51.737 fused_ordering(637) 00:18:51.737 fused_ordering(638) 00:18:51.737 fused_ordering(639) 00:18:51.737 fused_ordering(640) 00:18:51.737 fused_ordering(641) 00:18:51.737 fused_ordering(642) 00:18:51.737 fused_ordering(643) 00:18:51.737 fused_ordering(644) 00:18:51.737 fused_ordering(645) 00:18:51.737 fused_ordering(646) 00:18:51.737 fused_ordering(647) 00:18:51.737 fused_ordering(648) 00:18:51.737 fused_ordering(649) 00:18:51.737 fused_ordering(650) 00:18:51.737 fused_ordering(651) 00:18:51.737 fused_ordering(652) 00:18:51.737 fused_ordering(653) 00:18:51.737 fused_ordering(654) 00:18:51.737 fused_ordering(655) 00:18:51.737 fused_ordering(656) 00:18:51.737 fused_ordering(657) 00:18:51.737 fused_ordering(658) 00:18:51.737 fused_ordering(659) 00:18:51.737 fused_ordering(660) 00:18:51.737 fused_ordering(661) 00:18:51.737 fused_ordering(662) 00:18:51.737 fused_ordering(663) 00:18:51.737 fused_ordering(664) 00:18:51.737 fused_ordering(665) 00:18:51.737 fused_ordering(666) 00:18:51.737 fused_ordering(667) 00:18:51.737 fused_ordering(668) 00:18:51.737 fused_ordering(669) 00:18:51.737 fused_ordering(670) 00:18:51.737 fused_ordering(671) 00:18:51.737 fused_ordering(672) 00:18:51.737 fused_ordering(673) 00:18:51.737 fused_ordering(674) 00:18:51.737 fused_ordering(675) 00:18:51.737 fused_ordering(676) 00:18:51.737 fused_ordering(677) 00:18:51.737 fused_ordering(678) 00:18:51.737 fused_ordering(679) 00:18:51.737 fused_ordering(680) 00:18:51.737 fused_ordering(681) 00:18:51.737 fused_ordering(682) 00:18:51.737 fused_ordering(683) 00:18:51.737 fused_ordering(684) 00:18:51.737 fused_ordering(685) 00:18:51.737 fused_ordering(686) 00:18:51.737 fused_ordering(687) 00:18:51.737 fused_ordering(688) 00:18:51.737 fused_ordering(689) 00:18:51.737 fused_ordering(690) 00:18:51.737 fused_ordering(691) 00:18:51.737 fused_ordering(692) 00:18:51.737 fused_ordering(693) 00:18:51.737 fused_ordering(694) 00:18:51.737 fused_ordering(695) 00:18:51.737 fused_ordering(696) 00:18:51.737 fused_ordering(697) 00:18:51.737 fused_ordering(698) 00:18:51.737 fused_ordering(699) 00:18:51.737 fused_ordering(700) 00:18:51.737 fused_ordering(701) 00:18:51.738 fused_ordering(702) 00:18:51.738 fused_ordering(703) 00:18:51.738 fused_ordering(704) 00:18:51.738 fused_ordering(705) 00:18:51.738 fused_ordering(706) 00:18:51.738 fused_ordering(707) 00:18:51.738 fused_ordering(708) 00:18:51.738 fused_ordering(709) 00:18:51.738 fused_ordering(710) 00:18:51.738 fused_ordering(711) 00:18:51.738 fused_ordering(712) 00:18:51.738 fused_ordering(713) 00:18:51.738 fused_ordering(714) 00:18:51.738 fused_ordering(715) 00:18:51.738 fused_ordering(716) 00:18:51.738 fused_ordering(717) 00:18:51.738 fused_ordering(718) 00:18:51.738 fused_ordering(719) 00:18:51.738 fused_ordering(720) 00:18:51.738 fused_ordering(721) 00:18:51.738 fused_ordering(722) 00:18:51.738 fused_ordering(723) 00:18:51.738 fused_ordering(724) 00:18:51.738 fused_ordering(725) 00:18:51.738 fused_ordering(726) 00:18:51.738 fused_ordering(727) 00:18:51.738 fused_ordering(728) 00:18:51.738 fused_ordering(729) 00:18:51.738 fused_ordering(730) 00:18:51.738 fused_ordering(731) 00:18:51.738 fused_ordering(732) 00:18:51.738 fused_ordering(733) 00:18:51.738 fused_ordering(734) 00:18:51.738 fused_ordering(735) 00:18:51.738 fused_ordering(736) 00:18:51.738 fused_ordering(737) 00:18:51.738 fused_ordering(738) 00:18:51.738 fused_ordering(739) 00:18:51.738 fused_ordering(740) 00:18:51.738 fused_ordering(741) 00:18:51.738 fused_ordering(742) 00:18:51.738 fused_ordering(743) 
00:18:51.738 fused_ordering(744) 00:18:51.738 fused_ordering(745) 00:18:51.738 fused_ordering(746) 00:18:51.738 fused_ordering(747) 00:18:51.738 fused_ordering(748) 00:18:51.738 fused_ordering(749) 00:18:51.738 fused_ordering(750) 00:18:51.738 fused_ordering(751) 00:18:51.738 fused_ordering(752) 00:18:51.738 fused_ordering(753) 00:18:51.738 fused_ordering(754) 00:18:51.738 fused_ordering(755) 00:18:51.738 fused_ordering(756) 00:18:51.738 fused_ordering(757) 00:18:51.738 fused_ordering(758) 00:18:51.738 fused_ordering(759) 00:18:51.738 fused_ordering(760) 00:18:51.738 fused_ordering(761) 00:18:51.738 fused_ordering(762) 00:18:51.738 fused_ordering(763) 00:18:51.738 fused_ordering(764) 00:18:51.738 fused_ordering(765) 00:18:51.738 fused_ordering(766) 00:18:51.738 fused_ordering(767) 00:18:51.738 fused_ordering(768) 00:18:51.738 fused_ordering(769) 00:18:51.738 fused_ordering(770) 00:18:51.738 fused_ordering(771) 00:18:51.738 fused_ordering(772) 00:18:51.738 fused_ordering(773) 00:18:51.738 fused_ordering(774) 00:18:51.738 fused_ordering(775) 00:18:51.738 fused_ordering(776) 00:18:51.738 fused_ordering(777) 00:18:51.738 fused_ordering(778) 00:18:51.738 fused_ordering(779) 00:18:51.738 fused_ordering(780) 00:18:51.738 fused_ordering(781) 00:18:51.738 fused_ordering(782) 00:18:51.738 fused_ordering(783) 00:18:51.738 fused_ordering(784) 00:18:51.738 fused_ordering(785) 00:18:51.738 fused_ordering(786) 00:18:51.738 fused_ordering(787) 00:18:51.738 fused_ordering(788) 00:18:51.738 fused_ordering(789) 00:18:51.738 fused_ordering(790) 00:18:51.738 fused_ordering(791) 00:18:51.738 fused_ordering(792) 00:18:51.738 fused_ordering(793) 00:18:51.738 fused_ordering(794) 00:18:51.738 fused_ordering(795) 00:18:51.738 fused_ordering(796) 00:18:51.738 fused_ordering(797) 00:18:51.738 fused_ordering(798) 00:18:51.738 fused_ordering(799) 00:18:51.738 fused_ordering(800) 00:18:51.738 fused_ordering(801) 00:18:51.738 fused_ordering(802) 00:18:51.738 fused_ordering(803) 00:18:51.738 fused_ordering(804) 00:18:51.738 fused_ordering(805) 00:18:51.738 fused_ordering(806) 00:18:51.738 fused_ordering(807) 00:18:51.738 fused_ordering(808) 00:18:51.738 fused_ordering(809) 00:18:51.738 fused_ordering(810) 00:18:51.738 fused_ordering(811) 00:18:51.738 fused_ordering(812) 00:18:51.738 fused_ordering(813) 00:18:51.738 fused_ordering(814) 00:18:51.738 fused_ordering(815) 00:18:51.738 fused_ordering(816) 00:18:51.738 fused_ordering(817) 00:18:51.738 fused_ordering(818) 00:18:51.738 fused_ordering(819) 00:18:51.738 fused_ordering(820) 00:18:52.307 fused_ordering(821) 00:18:52.308 fused_ordering(822) 00:18:52.308 fused_ordering(823) 00:18:52.308 fused_ordering(824) 00:18:52.308 fused_ordering(825) 00:18:52.308 fused_ordering(826) 00:18:52.308 fused_ordering(827) 00:18:52.308 fused_ordering(828) 00:18:52.308 fused_ordering(829) 00:18:52.308 fused_ordering(830) 00:18:52.308 fused_ordering(831) 00:18:52.308 fused_ordering(832) 00:18:52.308 fused_ordering(833) 00:18:52.308 fused_ordering(834) 00:18:52.308 fused_ordering(835) 00:18:52.308 fused_ordering(836) 00:18:52.308 fused_ordering(837) 00:18:52.308 fused_ordering(838) 00:18:52.308 fused_ordering(839) 00:18:52.308 fused_ordering(840) 00:18:52.308 fused_ordering(841) 00:18:52.308 fused_ordering(842) 00:18:52.308 fused_ordering(843) 00:18:52.308 fused_ordering(844) 00:18:52.308 fused_ordering(845) 00:18:52.308 fused_ordering(846) 00:18:52.308 fused_ordering(847) 00:18:52.308 fused_ordering(848) 00:18:52.308 fused_ordering(849) 00:18:52.308 fused_ordering(850) 00:18:52.308 
fused_ordering(851) 00:18:52.308 fused_ordering(852) 00:18:52.308 fused_ordering(853) 00:18:52.308 fused_ordering(854) 00:18:52.308 fused_ordering(855) 00:18:52.308 fused_ordering(856) 00:18:52.308 fused_ordering(857) 00:18:52.308 fused_ordering(858) 00:18:52.308 fused_ordering(859) 00:18:52.308 fused_ordering(860) 00:18:52.308 fused_ordering(861) 00:18:52.308 fused_ordering(862) 00:18:52.308 fused_ordering(863) 00:18:52.308 fused_ordering(864) 00:18:52.308 fused_ordering(865) 00:18:52.308 fused_ordering(866) 00:18:52.308 fused_ordering(867) 00:18:52.308 fused_ordering(868) 00:18:52.308 fused_ordering(869) 00:18:52.308 fused_ordering(870) 00:18:52.308 fused_ordering(871) 00:18:52.308 fused_ordering(872) 00:18:52.308 fused_ordering(873) 00:18:52.308 fused_ordering(874) 00:18:52.308 fused_ordering(875) 00:18:52.308 fused_ordering(876) 00:18:52.308 fused_ordering(877) 00:18:52.308 fused_ordering(878) 00:18:52.308 fused_ordering(879) 00:18:52.308 fused_ordering(880) 00:18:52.308 fused_ordering(881) 00:18:52.308 fused_ordering(882) 00:18:52.308 fused_ordering(883) 00:18:52.308 fused_ordering(884) 00:18:52.308 fused_ordering(885) 00:18:52.308 fused_ordering(886) 00:18:52.308 fused_ordering(887) 00:18:52.308 fused_ordering(888) 00:18:52.308 fused_ordering(889) 00:18:52.308 fused_ordering(890) 00:18:52.308 fused_ordering(891) 00:18:52.308 fused_ordering(892) 00:18:52.308 fused_ordering(893) 00:18:52.308 fused_ordering(894) 00:18:52.308 fused_ordering(895) 00:18:52.308 fused_ordering(896) 00:18:52.308 fused_ordering(897) 00:18:52.308 fused_ordering(898) 00:18:52.308 fused_ordering(899) 00:18:52.308 fused_ordering(900) 00:18:52.308 fused_ordering(901) 00:18:52.308 fused_ordering(902) 00:18:52.308 fused_ordering(903) 00:18:52.308 fused_ordering(904) 00:18:52.308 fused_ordering(905) 00:18:52.308 fused_ordering(906) 00:18:52.308 fused_ordering(907) 00:18:52.308 fused_ordering(908) 00:18:52.308 fused_ordering(909) 00:18:52.308 fused_ordering(910) 00:18:52.308 fused_ordering(911) 00:18:52.308 fused_ordering(912) 00:18:52.308 fused_ordering(913) 00:18:52.308 fused_ordering(914) 00:18:52.308 fused_ordering(915) 00:18:52.308 fused_ordering(916) 00:18:52.308 fused_ordering(917) 00:18:52.308 fused_ordering(918) 00:18:52.308 fused_ordering(919) 00:18:52.308 fused_ordering(920) 00:18:52.308 fused_ordering(921) 00:18:52.308 fused_ordering(922) 00:18:52.308 fused_ordering(923) 00:18:52.308 fused_ordering(924) 00:18:52.308 fused_ordering(925) 00:18:52.308 fused_ordering(926) 00:18:52.308 fused_ordering(927) 00:18:52.308 fused_ordering(928) 00:18:52.308 fused_ordering(929) 00:18:52.308 fused_ordering(930) 00:18:52.308 fused_ordering(931) 00:18:52.308 fused_ordering(932) 00:18:52.308 fused_ordering(933) 00:18:52.308 fused_ordering(934) 00:18:52.308 fused_ordering(935) 00:18:52.308 fused_ordering(936) 00:18:52.308 fused_ordering(937) 00:18:52.308 fused_ordering(938) 00:18:52.308 fused_ordering(939) 00:18:52.308 fused_ordering(940) 00:18:52.308 fused_ordering(941) 00:18:52.308 fused_ordering(942) 00:18:52.308 fused_ordering(943) 00:18:52.308 fused_ordering(944) 00:18:52.308 fused_ordering(945) 00:18:52.308 fused_ordering(946) 00:18:52.308 fused_ordering(947) 00:18:52.308 fused_ordering(948) 00:18:52.308 fused_ordering(949) 00:18:52.308 fused_ordering(950) 00:18:52.308 fused_ordering(951) 00:18:52.308 fused_ordering(952) 00:18:52.308 fused_ordering(953) 00:18:52.308 fused_ordering(954) 00:18:52.308 fused_ordering(955) 00:18:52.308 fused_ordering(956) 00:18:52.308 fused_ordering(957) 00:18:52.308 fused_ordering(958) 
00:18:52.308 fused_ordering(959) 00:18:52.308 fused_ordering(960) 00:18:52.308 fused_ordering(961) 00:18:52.308 fused_ordering(962) 00:18:52.308 fused_ordering(963) 00:18:52.308 fused_ordering(964) 00:18:52.308 fused_ordering(965) 00:18:52.308 fused_ordering(966) 00:18:52.308 fused_ordering(967) 00:18:52.308 fused_ordering(968) 00:18:52.308 fused_ordering(969) 00:18:52.308 fused_ordering(970) 00:18:52.308 fused_ordering(971) 00:18:52.308 fused_ordering(972) 00:18:52.308 fused_ordering(973) 00:18:52.308 fused_ordering(974) 00:18:52.308 fused_ordering(975) 00:18:52.308 fused_ordering(976) 00:18:52.308 fused_ordering(977) 00:18:52.308 fused_ordering(978) 00:18:52.308 fused_ordering(979) 00:18:52.308 fused_ordering(980) 00:18:52.308 fused_ordering(981) 00:18:52.308 fused_ordering(982) 00:18:52.308 fused_ordering(983) 00:18:52.308 fused_ordering(984) 00:18:52.308 fused_ordering(985) 00:18:52.308 fused_ordering(986) 00:18:52.308 fused_ordering(987) 00:18:52.308 fused_ordering(988) 00:18:52.308 fused_ordering(989) 00:18:52.308 fused_ordering(990) 00:18:52.308 fused_ordering(991) 00:18:52.308 fused_ordering(992) 00:18:52.308 fused_ordering(993) 00:18:52.308 fused_ordering(994) 00:18:52.308 fused_ordering(995) 00:18:52.308 fused_ordering(996) 00:18:52.308 fused_ordering(997) 00:18:52.308 fused_ordering(998) 00:18:52.308 fused_ordering(999) 00:18:52.308 fused_ordering(1000) 00:18:52.308 fused_ordering(1001) 00:18:52.308 fused_ordering(1002) 00:18:52.308 fused_ordering(1003) 00:18:52.308 fused_ordering(1004) 00:18:52.308 fused_ordering(1005) 00:18:52.308 fused_ordering(1006) 00:18:52.308 fused_ordering(1007) 00:18:52.308 fused_ordering(1008) 00:18:52.308 fused_ordering(1009) 00:18:52.308 fused_ordering(1010) 00:18:52.308 fused_ordering(1011) 00:18:52.308 fused_ordering(1012) 00:18:52.308 fused_ordering(1013) 00:18:52.308 fused_ordering(1014) 00:18:52.308 fused_ordering(1015) 00:18:52.308 fused_ordering(1016) 00:18:52.308 fused_ordering(1017) 00:18:52.308 fused_ordering(1018) 00:18:52.308 fused_ordering(1019) 00:18:52.308 fused_ordering(1020) 00:18:52.308 fused_ordering(1021) 00:18:52.308 fused_ordering(1022) 00:18:52.308 fused_ordering(1023) 00:18:52.308 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:52.308 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:52.308 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:52.308 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:52.308 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:52.308 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:52.308 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:52.308 00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:52.308 rmmod nvme_tcp 00:18:52.308 rmmod nvme_fabrics 00:18:52.308 rmmod nvme_keyring 00:18:52.308 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:52.308 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:52.308 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:52.308 00:00:27 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 319525 ']' 00:18:52.308 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 319525 00:18:52.308 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 319525 ']' 00:18:52.308 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 319525 00:18:52.308 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:52.308 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.308 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 319525 00:18:52.308 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:52.308 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:52.308 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 319525' 00:18:52.308 killing process with pid 319525 00:18:52.308 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 319525 00:18:52.308 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 319525 00:18:52.568 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:52.568 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:52.568 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:52.568 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:52.569 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:52.569 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:52.569 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:52.569 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:52.569 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:52.569 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.569 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.569 00:00:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.475 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:54.475 00:18:54.475 real 0m10.537s 00:18:54.475 user 0m5.052s 00:18:54.475 sys 0m5.532s 00:18:54.475 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.475 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:54.475 ************************************ 00:18:54.475 END TEST nvmf_fused_ordering 00:18:54.475 
************************************ 00:18:54.475 00:00:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:54.475 00:00:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:54.475 00:00:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.475 00:00:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:54.475 ************************************ 00:18:54.475 START TEST nvmf_ns_masking 00:18:54.475 ************************************ 00:18:54.475 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:54.736 * Looking for test storage... 00:18:54.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:54.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.736 --rc genhtml_branch_coverage=1 00:18:54.736 --rc genhtml_function_coverage=1 00:18:54.736 --rc genhtml_legend=1 00:18:54.736 --rc geninfo_all_blocks=1 00:18:54.736 --rc geninfo_unexecuted_blocks=1 00:18:54.736 00:18:54.736 ' 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:54.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.736 --rc genhtml_branch_coverage=1 00:18:54.736 --rc genhtml_function_coverage=1 00:18:54.736 --rc genhtml_legend=1 00:18:54.736 --rc geninfo_all_blocks=1 00:18:54.736 --rc geninfo_unexecuted_blocks=1 00:18:54.736 00:18:54.736 ' 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:54.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.736 --rc genhtml_branch_coverage=1 00:18:54.736 --rc genhtml_function_coverage=1 00:18:54.736 --rc genhtml_legend=1 00:18:54.736 --rc geninfo_all_blocks=1 00:18:54.736 --rc geninfo_unexecuted_blocks=1 00:18:54.736 00:18:54.736 ' 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:54.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.736 --rc genhtml_branch_coverage=1 00:18:54.736 --rc genhtml_function_coverage=1 00:18:54.736 --rc genhtml_legend=1 00:18:54.736 --rc geninfo_all_blocks=1 00:18:54.736 --rc geninfo_unexecuted_blocks=1 00:18:54.736 00:18:54.736 ' 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@7 -- # uname -s 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:54.736 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:54.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:18:54.737 00:00:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3635335c-d877-4a42-8a3e-7b6a0a054978 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2482d4fe-1adc-482a-9e26-8526f7f46809 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0328adfa-086b-4d13-aaad-393a9fc713aa 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:54.737 00:00:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:01.331 
00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:01.331 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:01.331 
00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:01.331 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.331 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:01.332 Found net devices under 0000:86:00.0: cvl_0_0 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
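The discovery loop above reduces to reading the kernel's sysfs mapping from a PCI function to its network interfaces, which is how the trace ends up with cvl_0_0 and cvl_0_1. A standalone sketch of the same idea; the E810 vendor:device ID 8086:159b comes from the "Found 0000:86:00.x (0x8086 - 0x159b)" lines, and the lspci flags are standard options, not taken from the script:

    # List net interfaces backed by Intel E810 (8086:159b) PCI functions, as the trace does via sysfs
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] && echo "Found net device under $pci: $(basename "$netdir")"
        done
    done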
00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:01.332 Found net devices under 0000:86:00.1: cvl_0_1 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.332 00:00:35 
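nvmf_tcp_init, traced here and continuing just below (loopback, the iptables rule and the reachability pings), builds a two-port topology on a single machine: one E810 interface stays in the default namespace as the initiator, the other is moved into a network namespace and acts as the target. A condensed sketch using the interface names and 10.0.0.0/24 addresses from this run:

    ip netns add cvl_0_0_ns_spdk                       # namespace that owns the target-side port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                 # initiator -> target reachability check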
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:01.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.446 ms 00:19:01.332 00:19:01.332 --- 10.0.0.2 ping statistics --- 00:19:01.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.332 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:01.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:19:01.332 00:19:01.332 --- 10.0.0.1 ping statistics --- 00:19:01.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.332 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=323400 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 323400 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 323400 ']' 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:01.332 [2024-12-10 00:00:35.621463] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:19:01.332 [2024-12-10 00:00:35.621513] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.332 [2024-12-10 00:00:35.703520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.332 [2024-12-10 00:00:35.744199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.332 [2024-12-10 00:00:35.744235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.332 [2024-12-10 00:00:35.744242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.332 [2024-12-10 00:00:35.744248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.332 [2024-12-10 00:00:35.744254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
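nvmfappstart, traced above, runs the target application inside the namespace and then blocks until its JSON-RPC socket answers on /var/tmp/spdk.sock. A reduced sketch of that wait; $SPDK_DIR stands in for the full Jenkins workspace path shown in the trace, and polling rpc_get_methods is one way to reproduce the "Waiting for process to start up and listen..." behaviour:

    # Launch nvmf_tgt in the target namespace and wait for its RPC socket to come up
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done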
00:19:01.332 [2024-12-10 00:00:35.744811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.332 00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:01.332 [2024-12-10 00:00:36.050122] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.332 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:01.332 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:01.332 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:01.592 Malloc1 00:19:01.592 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:01.592 Malloc2 00:19:01.592 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:01.851 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:02.109 00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.368 [2024-12-10 00:00:37.096485] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.368 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:02.368 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0328adfa-086b-4d13-aaad-393a9fc713aa -a 10.0.0.2 -s 4420 -i 4 00:19:02.628 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:02.628 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:02.628 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.628 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 
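The RPC sequence above provisions the target before any masking is exercised: TCP transport, two 64 MiB malloc bdevs, a subsystem created with -a (allow any host, so access control here is per namespace, not per subsystem), namespace 1, and a TCP listener, followed by the first host-side connect. Collected in one place, with the rpc.py path shortened but every command and argument as traced:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: connect as host1 with the generated host UUID and 4 I/O queues
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
         -q nqn.2016-06.io.spdk:host1 -I "$HOSTID" -i 4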
00:19:02.628 00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:04.534 [ 0]:0x1 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:04.534 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:04.794 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2030d2185cd04a57bdda7881931b6960 00:19:04.794 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2030d2185cd04a57bdda7881931b6960 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:04.795 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:04.795 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:04.795 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:04.795 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:05.054 [ 0]:0x1 00:19:05.054 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:05.054 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:05.054 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2030d2185cd04a57bdda7881931b6960 00:19:05.054 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2030d2185cd04a57bdda7881931b6960 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:05.054 00:00:39 
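The ns_is_visible checks traced above are the core assertion of the test: a namespace counts as visible only if it shows up in 'nvme list-ns' and 'nvme id-ns' reports a non-zero NGUID for it. A self-contained version of that helper, reconstructed from the commands in the trace (the script's original takes only the NSID and reuses a global controller name; the explicit arguments here are a convenience):

    ns_is_visible() {                      # usage: ns_is_visible /dev/nvme0 0x1
        local ctrl=$1 nsid=$2 nguid
        nvme list-ns "$ctrl" | grep -q "$nsid" || return 1
        nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }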
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:05.054 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:05.054 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:05.054 [ 1]:0x2 00:19:05.054 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:05.054 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:05.054 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efd084ce271b4d19807f896c9f18c914 00:19:05.054 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efd084ce271b4d19807f896c9f18c914 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:05.054 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:05.054 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:05.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:05.054 00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:05.313 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:05.572 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:05.572 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0328adfa-086b-4d13-aaad-393a9fc713aa -a 10.0.0.2 -s 4420 -i 4 00:19:05.572 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:05.572 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:05.572 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.572 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:19:05.572 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:19:05.572 00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 
-- # return 0 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:08.114 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:08.115 [ 0]:0x2 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=efd084ce271b4d19807f896c9f18c914 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efd084ce271b4d19807f896c9f18c914 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:08.115 [ 0]:0x1 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2030d2185cd04a57bdda7881931b6960 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2030d2185cd04a57bdda7881931b6960 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:08.115 [ 1]:0x2 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efd084ce271b4d19807f896c9f18c914 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efd084ce271b4d19807f896c9f18c914 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.115 00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.374 00:00:43 
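This is the masking sequence the test is actually validating: a namespace attached with --no-auto-visible stays invisible to every host until it is explicitly granted, and revoking the grant hides it again on the already-connected controller without a disconnect, while the auto-visible namespace 2 is unaffected. The three RPCs involved, exactly as traced:

    # attach namespace 1 hidden from all hosts
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # grant host1 access -> namespace 1 appears on the connected controller
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # revoke the grant -> namespace 1 disappears again
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1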
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:08.374 [ 0]:0x2 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efd084ce271b4d19807f896c9f18c914 00:19:08.374 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efd084ce271b4d19807f896c9f18c914 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:08.375 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:08.375 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:08.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:08.634 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:08.634 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:08.634 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0328adfa-086b-4d13-aaad-393a9fc713aa -a 10.0.0.2 -s 4420 -i 4 00:19:08.893 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:08.893 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:08.893 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:08.893 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:08.893 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:08.893 00:00:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:11.437 [ 0]:0x1 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2030d2185cd04a57bdda7881931b6960 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2030d2185cd04a57bdda7881931b6960 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:11.437 [ 1]:0x2 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efd084ce271b4d19807f896c9f18c914 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efd084ce271b4d19807f896c9f18c914 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.437 00:00:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:11.437 [ 0]:0x2 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efd084ce271b4d19807f896c9f18c914 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efd084ce271b4d19807f896c9f18c914 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.437 00:00:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:19:11.437 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:11.700 [2024-12-10 00:00:46.407109] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:11.700 request: 00:19:11.700 { 00:19:11.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.700 "nsid": 2, 00:19:11.700 "host": "nqn.2016-06.io.spdk:host1", 00:19:11.700 "method": "nvmf_ns_remove_host", 00:19:11.700 "req_id": 1 00:19:11.700 } 00:19:11.700 Got JSON-RPC error response 00:19:11.700 response: 00:19:11.700 { 00:19:11.700 "code": -32602, 00:19:11.700 "message": "Invalid parameters" 00:19:11.700 } 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:11.700 00:00:46 
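The negative check above leans on the framework's NOT wrapper: namespace 2 was attached auto-visible and host1 was never explicitly added to it, so nvmf_ns_remove_host is rejected with -32602 "Invalid parameters", and the wrapper turns that expected failure into a pass. A simplified stand-in for the wrapper, assuming only the exit status matters (the real helper in autotest_common.sh also tracks the error code, as the es accounting in the trace shows):

    NOT() {                    # succeed only if the wrapped command fails
        if "$@"; then
            return 1           # command unexpectedly succeeded
        fi
        return 0
    }
    # expected to fail: host1 was never granted access to namespace 2
    NOT rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1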
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:11.700 [ 0]:0x2 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=efd084ce271b4d19807f896c9f18c914 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ efd084ce271b4d19807f896c9f18c914 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:11.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=325311 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 325311 
/var/tmp/host.sock 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 325311 ']' 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:11.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.700 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:11.962 [2024-12-10 00:00:46.636370] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:19:11.962 [2024-12-10 00:00:46.636416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325311 ] 00:19:11.962 [2024-12-10 00:00:46.712536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.962 [2024-12-10 00:00:46.752686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.222 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.222 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:12.222 00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:12.481 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:12.481 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3635335c-d877-4a42-8a3e-7b6a0a054978 00:19:12.481 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:12.481 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3635335CD8774A428A3E7B6A0A054978 -i 00:19:12.740 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2482d4fe-1adc-482a-9e26-8526f7f46809 00:19:12.740 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:12.740 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2482D4FE1ADC482A9E268526F7F46809 -i 00:19:12.999 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:13.259 00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:13.259 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:13.259 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:13.827 nvme0n1 00:19:13.827 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:13.827 00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:14.087 nvme1n2 00:19:14.087 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:14.087 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:14.087 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:14.087 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:14.087 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:14.350 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:14.350 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:14.350 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:14.350 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:14.611 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3635335c-d877-4a42-8a3e-7b6a0a054978 == \3\6\3\5\3\3\5\c\-\d\8\7\7\-\4\a\4\2\-\8\a\3\e\-\7\b\6\a\0\a\0\5\4\9\7\8 ]] 00:19:14.611 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:14.611 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:14.611 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:14.871 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- 
# [[ 2482d4fe-1adc-482a-9e26-8526f7f46809 == \2\4\8\2\d\4\f\e\-\1\a\d\c\-\4\8\2\a\-\9\e\2\6\-\8\5\2\6\f\7\f\4\6\8\0\9 ]] 00:19:14.871 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:15.130 00:00:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:15.130 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 3635335c-d877-4a42-8a3e-7b6a0a054978 00:19:15.130 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:15.130 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3635335CD8774A428A3E7B6A0A054978 00:19:15.130 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:15.130 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3635335CD8774A428A3E7B6A0A054978 00:19:15.130 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:19:15.130 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:15.130 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:19:15.130 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:15.130 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:19:15.130 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:15.130 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:19:15.130 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:19:15.131 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3635335CD8774A428A3E7B6A0A054978 00:19:15.389 [2024-12-10 00:00:50.217857] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:19:15.390 [2024-12-10 00:00:50.217890] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:15.390 [2024-12-10 00:00:50.217901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:15.390 request: 00:19:15.390 { 00:19:15.390 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.390 "namespace": { 
00:19:15.390 "bdev_name": "invalid", 00:19:15.390 "nsid": 1, 00:19:15.390 "nguid": "3635335CD8774A428A3E7B6A0A054978", 00:19:15.390 "no_auto_visible": false, 00:19:15.390 "hide_metadata": false 00:19:15.390 }, 00:19:15.390 "method": "nvmf_subsystem_add_ns", 00:19:15.390 "req_id": 1 00:19:15.390 } 00:19:15.390 Got JSON-RPC error response 00:19:15.390 response: 00:19:15.390 { 00:19:15.390 "code": -32602, 00:19:15.390 "message": "Invalid parameters" 00:19:15.390 } 00:19:15.390 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:15.390 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:15.390 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:15.390 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:15.390 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 3635335c-d877-4a42-8a3e-7b6a0a054978 00:19:15.390 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:15.390 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3635335CD8774A428A3E7B6A0A054978 -i 00:19:15.650 00:00:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:17.556 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:17.556 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:17.556 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:17.815 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:17.815 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 325311 00:19:17.815 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 325311 ']' 00:19:17.815 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 325311 00:19:17.815 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:17.815 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.815 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 325311 00:19:17.815 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:17.815 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:17.815 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 325311' 00:19:17.815 killing process with pid 325311 00:19:17.815 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 325311 00:19:17.815 00:00:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 325311 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:18.384 rmmod nvme_tcp 00:19:18.384 rmmod nvme_fabrics 00:19:18.384 rmmod nvme_keyring 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 323400 ']' 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 323400 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 323400 ']' 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 323400 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.384 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 323400 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 323400' 00:19:18.643 killing process with pid 323400 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 323400 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 323400 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # 
grep -v SPDK_NVMF 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.643 00:00:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:21.183 00:19:21.183 real 0m26.213s 00:19:21.183 user 0m31.607s 00:19:21.183 sys 0m7.098s 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:21.183 ************************************ 00:19:21.183 END TEST nvmf_ns_masking 00:19:21.183 ************************************ 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:21.183 ************************************ 00:19:21.183 START TEST nvmf_nvme_cli 00:19:21.183 ************************************ 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:21.183 * Looking for test storage... 
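The ns_masking section that just finished drives everything through scripts/rpc.py: each host NQN is granted access to one namespace, a second SPDK process reachable on /var/tmp/host.sock attaches to the subsystem once per host, and the bdev names and NGUIDs visible to each host are compared against what was exported. A condensed sketch of that sequence, assuming a target already listening on 10.0.0.2:4420 and a host-side bdev process on /var/tmp/host.sock, with the same NQNs and NGUID as the trace (the rpc.py path is shortened here):

#!/usr/bin/env bash
# Sketch only: the RPC sequence traced above, not a general recipe.
RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Grant namespace 1 to host1 and namespace 2 to host2.
$RPC nvmf_ns_add_host $NQN 1 nqn.2016-06.io.spdk:host1
$RPC nvmf_ns_add_host $NQN 2 nqn.2016-06.io.spdk:host2

# Attach from the host-side SPDK instance, once per host NQN.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 \
    -s 4420 -n $NQN -q nqn.2016-06.io.spdk:host1 -b nvme0
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 \
    -s 4420 -n $NQN -q nqn.2016-06.io.spdk:host2 -b nvme1

# Each host should now see exactly the namespace it was granted.
$RPC -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name'

# Remove a namespace and re-add it with an explicit NGUID (UUID with hyphens stripped),
# as the trace does after checking the UUIDs; passing a bdev name that does not exist
# produces the JSON-RPC "Invalid parameters" error shown above.
$RPC nvmf_subsystem_remove_ns $NQN 1
$RPC nvmf_subsystem_add_ns $NQN Malloc1 -n 1 -g 3635335CD8774A428A3E7B6A0A054978 -i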
00:19:21.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:21.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.183 --rc genhtml_branch_coverage=1 00:19:21.183 --rc genhtml_function_coverage=1 00:19:21.183 --rc genhtml_legend=1 00:19:21.183 --rc geninfo_all_blocks=1 00:19:21.183 --rc geninfo_unexecuted_blocks=1 00:19:21.183 00:19:21.183 ' 00:19:21.183 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:21.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.183 --rc genhtml_branch_coverage=1 00:19:21.183 --rc genhtml_function_coverage=1 00:19:21.183 --rc genhtml_legend=1 00:19:21.183 --rc geninfo_all_blocks=1 00:19:21.183 --rc geninfo_unexecuted_blocks=1 00:19:21.183 00:19:21.184 ' 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:21.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.184 --rc genhtml_branch_coverage=1 00:19:21.184 --rc genhtml_function_coverage=1 00:19:21.184 --rc genhtml_legend=1 00:19:21.184 --rc geninfo_all_blocks=1 00:19:21.184 --rc geninfo_unexecuted_blocks=1 00:19:21.184 00:19:21.184 ' 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:21.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.184 --rc genhtml_branch_coverage=1 00:19:21.184 --rc genhtml_function_coverage=1 00:19:21.184 --rc genhtml_legend=1 00:19:21.184 --rc geninfo_all_blocks=1 00:19:21.184 --rc geninfo_unexecuted_blocks=1 00:19:21.184 00:19:21.184 ' 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
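The stretch above is scripts/common.sh deciding which lcov flags to use: it reads `lcov --version`, then `lt 1.15 2` hands both strings to cmp_versions, which splits them into fields and compares numeric components left to right until one side wins. A simplified standalone sketch of that comparison (dots only, numeric fields assumed, missing fields treated as zero in this sketch; it is not the SPDK helper itself):

# version_lt A B -> exit 0 if A sorts strictly before B, 1 otherwise.
version_lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "older lcov: enable branch/function coverage options"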
00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:21.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:21.184 00:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:21.184 00:00:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:27.778 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:27.778 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.778 
00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:27.778 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:27.779 Found net devices under 0000:86:00.0: cvl_0_0 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:27.779 Found net devices under 0000:86:00.1: cvl_0_1 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:27.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:27.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:19:27.779 00:19:27.779 --- 10.0.0.2 ping statistics --- 00:19:27.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.779 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:27.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:27.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:19:27.779 00:19:27.779 --- 10.0.0.1 ping statistics --- 00:19:27.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.779 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=330030 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 330030 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 330030 ']' 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.779 00:01:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:27.779 [2024-12-10 00:01:01.844768] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:19:27.779 [2024-12-10 00:01:01.844809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.779 [2024-12-10 00:01:01.926424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:27.779 [2024-12-10 00:01:01.966682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.779 [2024-12-10 00:01:01.966724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.779 [2024-12-10 00:01:01.966734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.779 [2024-12-10 00:01:01.966742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.779 [2024-12-10 00:01:01.966748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:27.779 [2024-12-10 00:01:01.968262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.779 [2024-12-10 00:01:01.968361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.779 [2024-12-10 00:01:01.968467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:27.779 [2024-12-10 00:01:01.968468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.779 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.779 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:27.779 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:27.779 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.779 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:28.043 [2024-12-10 00:01:02.724516] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:28.043 Malloc0 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
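At this point the target has a TCP transport and two 64 MiB malloc bdevs; the subsystem, its namespaces and the listeners are added in the trace that follows. Pulled together, the target-side bring-up this test performs looks roughly like the sketch below (rpc_cmd in the trace issues the same JSON-RPC calls that scripts/rpc.py exposes; serial, NQN and address are the ones used in this run):

# Sketch of the target-side setup traced in this test.
RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport
$RPC bdev_malloc_create 64 512 -b Malloc0               # 64 MiB, 512-byte blocks
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem $NQN -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$RPC nvmf_subsystem_add_ns $NQN Malloc0
$RPC nvmf_subsystem_add_ns $NQN Malloc1
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

In the trace the target itself runs inside the cvl_0_0_ns_spdk network namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt), so the 10.0.0.1/10.0.0.2 pair set up earlier stays isolated from the machine's real interfaces.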
00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:28.043 Malloc1 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:28.043 [2024-12-10 00:01:02.815153] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:19:28.043 00:19:28.043 Discovery Log Number of Records 2, Generation counter 2 00:19:28.043 =====Discovery Log Entry 0====== 00:19:28.043 trtype: tcp 00:19:28.043 adrfam: ipv4 00:19:28.043 subtype: current discovery subsystem 00:19:28.043 treq: not required 00:19:28.043 portid: 0 00:19:28.043 trsvcid: 4420 00:19:28.043 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:28.043 traddr: 10.0.0.2 00:19:28.043 eflags: explicit discovery connections, duplicate discovery information 00:19:28.043 sectype: none 00:19:28.043 =====Discovery Log Entry 1====== 00:19:28.043 trtype: tcp 00:19:28.043 adrfam: ipv4 00:19:28.043 subtype: nvme subsystem 00:19:28.043 treq: not required 00:19:28.043 portid: 0 00:19:28.043 trsvcid: 4420 00:19:28.043 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:28.043 traddr: 10.0.0.2 00:19:28.043 eflags: none 00:19:28.043 sectype: none 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.043 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:28.310 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:28.310 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.310 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:28.310 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:28.310 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:28.310 00:01:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:29.284 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:29.284 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:29.284 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:29.284 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:29.284 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:29.284 00:01:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:31.270 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:31.560 00:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:31.560 /dev/nvme0n2 ]] 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:31.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:31.560 00:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:31.560 rmmod nvme_tcp 00:19:31.560 rmmod nvme_fabrics 00:19:31.560 rmmod nvme_keyring 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 330030 ']' 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 330030 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 330030 ']' 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 330030 00:19:31.560 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:31.561 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.561 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 330030 
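The host side of this run reduces to the standard nvme-cli flow: discover the target, connect with the generated host NQN, confirm the namespaces show up as block devices with the expected serial, then disconnect. A compressed sketch using the same addresses and identifiers as the trace (the hostnqn/hostid values come from the `nvme gen-hostnqn` step earlier in the log):

# Sketch of the host-side nvme-cli sequence exercised above.
ADDR=10.0.0.2; PORT=4420
NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562

nvme discover -t tcp -a $ADDR -s $PORT --hostnqn=$HOSTNQN --hostid=$HOSTID
nvme connect  -t tcp -a $ADDR -s $PORT -n $NQN --hostnqn=$HOSTNQN --hostid=$HOSTID

sleep 2                                                  # give the kernel time to create the nodes
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2 (Malloc0 + Malloc1)

nvme disconnect -n $NQN

The expected count of 2 matches the two malloc namespaces added to cnode1 above; waitforserial in the trace polls the same lsblk output until both appear before moving on to the disconnect.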
00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 330030' 00:19:31.836 killing process with pid 330030 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 330030 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 330030 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.836 00:01:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.468 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:34.468 00:19:34.468 real 0m13.098s 00:19:34.468 user 0m20.797s 00:19:34.468 sys 0m5.033s 00:19:34.468 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.468 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:34.468 ************************************ 00:19:34.468 END TEST nvmf_nvme_cli 00:19:34.468 ************************************ 00:19:34.468 00:01:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:19:34.468 00:01:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:34.468 00:01:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:34.468 00:01:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.468 00:01:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:34.468 ************************************ 00:19:34.468 START TEST nvmf_vfio_user 00:19:34.468 ************************************ 00:19:34.468 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:19:34.468 * Looking for test storage... 00:19:34.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:19:34.468 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:34.468 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:19:34.468 00:01:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:34.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.468 --rc genhtml_branch_coverage=1 00:19:34.468 --rc genhtml_function_coverage=1 00:19:34.468 --rc genhtml_legend=1 00:19:34.468 --rc geninfo_all_blocks=1 00:19:34.468 --rc geninfo_unexecuted_blocks=1 00:19:34.468 00:19:34.468 ' 00:19:34.468 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:34.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.468 --rc genhtml_branch_coverage=1 00:19:34.468 --rc genhtml_function_coverage=1 00:19:34.468 --rc genhtml_legend=1 00:19:34.468 --rc geninfo_all_blocks=1 00:19:34.468 --rc geninfo_unexecuted_blocks=1 00:19:34.468 00:19:34.468 ' 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:34.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.469 --rc genhtml_branch_coverage=1 00:19:34.469 --rc genhtml_function_coverage=1 00:19:34.469 --rc genhtml_legend=1 00:19:34.469 --rc geninfo_all_blocks=1 00:19:34.469 --rc geninfo_unexecuted_blocks=1 00:19:34.469 00:19:34.469 ' 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:34.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.469 --rc genhtml_branch_coverage=1 00:19:34.469 --rc genhtml_function_coverage=1 00:19:34.469 --rc genhtml_legend=1 00:19:34.469 --rc geninfo_all_blocks=1 00:19:34.469 --rc geninfo_unexecuted_blocks=1 00:19:34.469 00:19:34.469 ' 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:34.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
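[editor note] The lcov version check traced a few lines up (scripts/common.sh: lt -> cmp_versions) splits each version string on '.', '-' and ':' and compares the fields numerically, padding the shorter list with zeros. A standalone sketch of that comparison pattern; the function name is illustrative and fields are assumed to be plain decimal integers:

    # Return success if version $1 is strictly older than version $2 (sketch of the cmp_versions pattern)
    version_lt_sketch() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    # usage: version_lt_sketch 1.15 2 && echo "lcov older than 2"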
00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=331345 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 331345' 00:19:34.469 Process pid: 331345 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 331345 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 331345 ']' 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.469 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:34.469 [2024-12-10 00:01:09.121572] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:19:34.469 [2024-12-10 00:01:09.121617] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.469 [2024-12-10 00:01:09.197906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:34.469 [2024-12-10 00:01:09.239907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.469 [2024-12-10 00:01:09.239941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:34.469 [2024-12-10 00:01:09.239948] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.469 [2024-12-10 00:01:09.239954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.469 [2024-12-10 00:01:09.239959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.469 [2024-12-10 00:01:09.241524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.470 [2024-12-10 00:01:09.241630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.470 [2024-12-10 00:01:09.241736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.470 [2024-12-10 00:01:09.241737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:34.470 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.470 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:34.470 00:01:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:35.465 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:35.733 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:35.733 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:35.733 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:35.733 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:35.733 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:36.040 Malloc1 00:19:36.040 00:01:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:36.312 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:36.312 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:36.582 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:36.582 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:36.582 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:36.872 Malloc2 00:19:36.872 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 
-a -s SPDK2 00:19:37.159 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:37.159 00:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:37.445 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:37.445 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:37.445 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:37.445 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:37.445 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:37.445 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:37.445 [2024-12-10 00:01:12.234909] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:19:37.445 [2024-12-10 00:01:12.234942] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid331972 ] 00:19:37.445 [2024-12-10 00:01:12.276121] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:37.445 [2024-12-10 00:01:12.288434] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:37.445 [2024-12-10 00:01:12.288459] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd34d885000 00:19:37.445 [2024-12-10 00:01:12.289434] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:37.445 [2024-12-10 00:01:12.290435] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:37.445 [2024-12-10 00:01:12.291441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:37.445 [2024-12-10 00:01:12.292446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:37.445 [2024-12-10 00:01:12.293454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:37.445 [2024-12-10 00:01:12.294456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:37.445 [2024-12-10 00:01:12.295466] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, 
Offset 0x0, Flags 0x3, Cap offset 0 00:19:37.445 [2024-12-10 00:01:12.296465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:37.445 [2024-12-10 00:01:12.297475] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:37.445 [2024-12-10 00:01:12.297484] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd34d87a000 00:19:37.445 [2024-12-10 00:01:12.298430] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:37.445 [2024-12-10 00:01:12.309042] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:37.445 [2024-12-10 00:01:12.309069] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:19:37.445 [2024-12-10 00:01:12.313571] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:37.445 [2024-12-10 00:01:12.313611] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:37.445 [2024-12-10 00:01:12.313687] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:19:37.445 [2024-12-10 00:01:12.313702] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:19:37.445 [2024-12-10 00:01:12.313708] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:19:37.445 [2024-12-10 00:01:12.314575] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:37.445 [2024-12-10 00:01:12.314587] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:19:37.445 [2024-12-10 00:01:12.314594] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:19:37.445 [2024-12-10 00:01:12.315578] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:37.445 [2024-12-10 00:01:12.315585] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:19:37.445 [2024-12-10 00:01:12.315592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:37.445 [2024-12-10 00:01:12.316581] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:37.445 [2024-12-10 00:01:12.316590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:37.445 [2024-12-10 00:01:12.317585] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x1c, value 0x0 00:19:37.445 [2024-12-10 00:01:12.317595] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:19:37.445 [2024-12-10 00:01:12.317600] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:37.445 [2024-12-10 00:01:12.317607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:37.445 [2024-12-10 00:01:12.317715] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:19:37.445 [2024-12-10 00:01:12.317719] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:37.446 [2024-12-10 00:01:12.317724] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:37.446 [2024-12-10 00:01:12.318591] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:37.446 [2024-12-10 00:01:12.319600] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:37.446 [2024-12-10 00:01:12.320605] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:37.446 [2024-12-10 00:01:12.321604] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:37.446 [2024-12-10 00:01:12.321678] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:37.446 [2024-12-10 00:01:12.322615] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:37.446 [2024-12-10 00:01:12.322623] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:37.446 [2024-12-10 00:01:12.322627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.322645] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:19:37.446 [2024-12-10 00:01:12.322651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.322670] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:37.446 [2024-12-10 00:01:12.322675] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:37.446 [2024-12-10 00:01:12.322679] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:37.446 [2024-12-10 00:01:12.322692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 
0x2000002fb000 PRP2 0x0 00:19:37.446 [2024-12-10 00:01:12.322734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:37.446 [2024-12-10 00:01:12.322744] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:19:37.446 [2024-12-10 00:01:12.322749] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:19:37.446 [2024-12-10 00:01:12.322753] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:19:37.446 [2024-12-10 00:01:12.322757] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:37.446 [2024-12-10 00:01:12.322762] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:19:37.446 [2024-12-10 00:01:12.322768] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:19:37.446 [2024-12-10 00:01:12.322773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.322780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.322789] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:37.446 [2024-12-10 00:01:12.322799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:37.446 [2024-12-10 00:01:12.322811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.446 [2024-12-10 00:01:12.322819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.446 [2024-12-10 00:01:12.322826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.446 [2024-12-10 00:01:12.322833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.446 [2024-12-10 00:01:12.322838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.322845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.322854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:37.446 [2024-12-10 00:01:12.322867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:37.446 [2024-12-10 00:01:12.322873] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 
00:19:37.446 [2024-12-10 00:01:12.322878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.322885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.322891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.322899] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:37.446 [2024-12-10 00:01:12.322911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:37.446 [2024-12-10 00:01:12.322961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.322969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.322977] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:37.446 [2024-12-10 00:01:12.322981] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:37.446 [2024-12-10 00:01:12.322984] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:37.446 [2024-12-10 00:01:12.322992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:37.446 [2024-12-10 00:01:12.323002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:37.446 [2024-12-10 00:01:12.323013] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:19:37.446 [2024-12-10 00:01:12.323024] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.323031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.323037] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:37.446 [2024-12-10 00:01:12.323041] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:37.446 [2024-12-10 00:01:12.323044] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:37.446 [2024-12-10 00:01:12.323050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:37.446 [2024-12-10 00:01:12.323077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:37.446 [2024-12-10 00:01:12.323086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
identify namespace id descriptors (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.323093] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.323099] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:37.446 [2024-12-10 00:01:12.323103] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:37.446 [2024-12-10 00:01:12.323106] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:37.446 [2024-12-10 00:01:12.323112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:37.446 [2024-12-10 00:01:12.323125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:37.446 [2024-12-10 00:01:12.323134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.323140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.323146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.323152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.323163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.323169] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.323173] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:37.446 [2024-12-10 00:01:12.323177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:19:37.446 [2024-12-10 00:01:12.323182] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:19:37.446 [2024-12-10 00:01:12.323200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:37.446 [2024-12-10 00:01:12.323210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:37.446 [2024-12-10 00:01:12.323220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:37.446 [2024-12-10 00:01:12.323231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:37.446 [2024-12-10 00:01:12.323241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE 
THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:37.446 [2024-12-10 00:01:12.323249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:37.446 [2024-12-10 00:01:12.323259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:37.446 [2024-12-10 00:01:12.323269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:37.446 [2024-12-10 00:01:12.323283] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:37.446 [2024-12-10 00:01:12.323288] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:37.446 [2024-12-10 00:01:12.323291] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:37.447 [2024-12-10 00:01:12.323294] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:37.447 [2024-12-10 00:01:12.323297] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:37.447 [2024-12-10 00:01:12.323302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:37.447 [2024-12-10 00:01:12.323309] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:37.447 [2024-12-10 00:01:12.323313] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:37.447 [2024-12-10 00:01:12.323316] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:37.447 [2024-12-10 00:01:12.323321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:37.447 [2024-12-10 00:01:12.323327] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:37.447 [2024-12-10 00:01:12.323331] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:37.447 [2024-12-10 00:01:12.323334] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:37.447 [2024-12-10 00:01:12.323340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:37.447 [2024-12-10 00:01:12.323346] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:37.447 [2024-12-10 00:01:12.323350] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:37.447 [2024-12-10 00:01:12.323353] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:37.447 [2024-12-10 00:01:12.323359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:37.447 [2024-12-10 00:01:12.323365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:37.447 [2024-12-10 00:01:12.323375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:37.447 [2024-12-10 00:01:12.323386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:37.447 [2024-12-10 00:01:12.323393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:37.447 ===================================================== 00:19:37.447 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:37.447 ===================================================== 00:19:37.447 Controller Capabilities/Features 00:19:37.447 ================================ 00:19:37.447 Vendor ID: 4e58 00:19:37.447 Subsystem Vendor ID: 4e58 00:19:37.447 Serial Number: SPDK1 00:19:37.447 Model Number: SPDK bdev Controller 00:19:37.447 Firmware Version: 25.01 00:19:37.447 Recommended Arb Burst: 6 00:19:37.447 IEEE OUI Identifier: 8d 6b 50 00:19:37.447 Multi-path I/O 00:19:37.447 May have multiple subsystem ports: Yes 00:19:37.447 May have multiple controllers: Yes 00:19:37.447 Associated with SR-IOV VF: No 00:19:37.447 Max Data Transfer Size: 131072 00:19:37.447 Max Number of Namespaces: 32 00:19:37.447 Max Number of I/O Queues: 127 00:19:37.447 NVMe Specification Version (VS): 1.3 00:19:37.447 NVMe Specification Version (Identify): 1.3 00:19:37.447 Maximum Queue Entries: 256 00:19:37.447 Contiguous Queues Required: Yes 00:19:37.447 Arbitration Mechanisms Supported 00:19:37.447 Weighted Round Robin: Not Supported 00:19:37.447 Vendor Specific: Not Supported 00:19:37.447 Reset Timeout: 15000 ms 00:19:37.447 Doorbell Stride: 4 bytes 00:19:37.447 NVM Subsystem Reset: Not Supported 00:19:37.447 Command Sets Supported 00:19:37.447 NVM Command Set: Supported 00:19:37.447 Boot Partition: Not Supported 00:19:37.447 Memory Page Size Minimum: 4096 bytes 00:19:37.447 Memory Page Size Maximum: 4096 bytes 00:19:37.447 Persistent Memory Region: Not Supported 00:19:37.447 Optional Asynchronous Events Supported 00:19:37.447 Namespace Attribute Notices: Supported 00:19:37.447 Firmware Activation Notices: Not Supported 00:19:37.447 ANA Change Notices: Not Supported 00:19:37.447 PLE Aggregate Log Change Notices: Not Supported 00:19:37.447 LBA Status Info Alert Notices: Not Supported 00:19:37.447 EGE Aggregate Log Change Notices: Not Supported 00:19:37.447 Normal NVM Subsystem Shutdown event: Not Supported 00:19:37.447 Zone Descriptor Change Notices: Not Supported 00:19:37.447 Discovery Log Change Notices: Not Supported 00:19:37.447 Controller Attributes 00:19:37.447 128-bit Host Identifier: Supported 00:19:37.447 Non-Operational Permissive Mode: Not Supported 00:19:37.447 NVM Sets: Not Supported 00:19:37.447 Read Recovery Levels: Not Supported 00:19:37.447 Endurance Groups: Not Supported 00:19:37.447 Predictable Latency Mode: Not Supported 00:19:37.447 Traffic Based Keep ALive: Not Supported 00:19:37.447 Namespace Granularity: Not Supported 00:19:37.447 SQ Associations: Not Supported 00:19:37.447 UUID List: Not Supported 00:19:37.447 Multi-Domain Subsystem: Not Supported 00:19:37.447 Fixed Capacity Management: Not Supported 00:19:37.447 Variable Capacity Management: Not Supported 00:19:37.447 Delete Endurance Group: Not Supported 00:19:37.447 Delete NVM Set: Not Supported 00:19:37.447 Extended LBA Formats Supported: Not Supported 00:19:37.447 Flexible Data Placement Supported: Not Supported 00:19:37.447 00:19:37.447 Controller Memory Buffer Support 00:19:37.447 ================================ 
00:19:37.447 Supported: No 00:19:37.447 00:19:37.447 Persistent Memory Region Support 00:19:37.447 ================================ 00:19:37.447 Supported: No 00:19:37.447 00:19:37.447 Admin Command Set Attributes 00:19:37.447 ============================ 00:19:37.447 Security Send/Receive: Not Supported 00:19:37.447 Format NVM: Not Supported 00:19:37.447 Firmware Activate/Download: Not Supported 00:19:37.447 Namespace Management: Not Supported 00:19:37.447 Device Self-Test: Not Supported 00:19:37.447 Directives: Not Supported 00:19:37.447 NVMe-MI: Not Supported 00:19:37.447 Virtualization Management: Not Supported 00:19:37.447 Doorbell Buffer Config: Not Supported 00:19:37.447 Get LBA Status Capability: Not Supported 00:19:37.447 Command & Feature Lockdown Capability: Not Supported 00:19:37.447 Abort Command Limit: 4 00:19:37.447 Async Event Request Limit: 4 00:19:37.447 Number of Firmware Slots: N/A 00:19:37.447 Firmware Slot 1 Read-Only: N/A 00:19:37.447 Firmware Activation Without Reset: N/A 00:19:37.447 Multiple Update Detection Support: N/A 00:19:37.447 Firmware Update Granularity: No Information Provided 00:19:37.447 Per-Namespace SMART Log: No 00:19:37.447 Asymmetric Namespace Access Log Page: Not Supported 00:19:37.447 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:37.447 Command Effects Log Page: Supported 00:19:37.447 Get Log Page Extended Data: Supported 00:19:37.447 Telemetry Log Pages: Not Supported 00:19:37.447 Persistent Event Log Pages: Not Supported 00:19:37.447 Supported Log Pages Log Page: May Support 00:19:37.447 Commands Supported & Effects Log Page: Not Supported 00:19:37.447 Feature Identifiers & Effects Log Page:May Support 00:19:37.447 NVMe-MI Commands & Effects Log Page: May Support 00:19:37.447 Data Area 4 for Telemetry Log: Not Supported 00:19:37.447 Error Log Page Entries Supported: 128 00:19:37.447 Keep Alive: Supported 00:19:37.447 Keep Alive Granularity: 10000 ms 00:19:37.447 00:19:37.447 NVM Command Set Attributes 00:19:37.447 ========================== 00:19:37.447 Submission Queue Entry Size 00:19:37.447 Max: 64 00:19:37.447 Min: 64 00:19:37.447 Completion Queue Entry Size 00:19:37.447 Max: 16 00:19:37.447 Min: 16 00:19:37.447 Number of Namespaces: 32 00:19:37.447 Compare Command: Supported 00:19:37.447 Write Uncorrectable Command: Not Supported 00:19:37.447 Dataset Management Command: Supported 00:19:37.447 Write Zeroes Command: Supported 00:19:37.447 Set Features Save Field: Not Supported 00:19:37.447 Reservations: Not Supported 00:19:37.447 Timestamp: Not Supported 00:19:37.447 Copy: Supported 00:19:37.447 Volatile Write Cache: Present 00:19:37.447 Atomic Write Unit (Normal): 1 00:19:37.447 Atomic Write Unit (PFail): 1 00:19:37.447 Atomic Compare & Write Unit: 1 00:19:37.447 Fused Compare & Write: Supported 00:19:37.447 Scatter-Gather List 00:19:37.447 SGL Command Set: Supported (Dword aligned) 00:19:37.447 SGL Keyed: Not Supported 00:19:37.447 SGL Bit Bucket Descriptor: Not Supported 00:19:37.447 SGL Metadata Pointer: Not Supported 00:19:37.447 Oversized SGL: Not Supported 00:19:37.447 SGL Metadata Address: Not Supported 00:19:37.447 SGL Offset: Not Supported 00:19:37.447 Transport SGL Data Block: Not Supported 00:19:37.447 Replay Protected Memory Block: Not Supported 00:19:37.447 00:19:37.447 Firmware Slot Information 00:19:37.447 ========================= 00:19:37.447 Active slot: 1 00:19:37.447 Slot 1 Firmware Revision: 25.01 00:19:37.447 00:19:37.447 00:19:37.447 Commands Supported and Effects 00:19:37.447 ============================== 
00:19:37.447 Admin Commands 00:19:37.447 -------------- 00:19:37.447 Get Log Page (02h): Supported 00:19:37.447 Identify (06h): Supported 00:19:37.447 Abort (08h): Supported 00:19:37.447 Set Features (09h): Supported 00:19:37.447 Get Features (0Ah): Supported 00:19:37.447 Asynchronous Event Request (0Ch): Supported 00:19:37.447 Keep Alive (18h): Supported 00:19:37.447 I/O Commands 00:19:37.447 ------------ 00:19:37.448 Flush (00h): Supported LBA-Change 00:19:37.448 Write (01h): Supported LBA-Change 00:19:37.448 Read (02h): Supported 00:19:37.448 Compare (05h): Supported 00:19:37.448 Write Zeroes (08h): Supported LBA-Change 00:19:37.448 Dataset Management (09h): Supported LBA-Change 00:19:37.448 Copy (19h): Supported LBA-Change 00:19:37.448 00:19:37.448 Error Log 00:19:37.448 ========= 00:19:37.448 00:19:37.448 Arbitration 00:19:37.448 =========== 00:19:37.448 Arbitration Burst: 1 00:19:37.448 00:19:37.448 Power Management 00:19:37.448 ================ 00:19:37.448 Number of Power States: 1 00:19:37.448 Current Power State: Power State #0 00:19:37.448 Power State #0: 00:19:37.448 Max Power: 0.00 W 00:19:37.448 Non-Operational State: Operational 00:19:37.448 Entry Latency: Not Reported 00:19:37.448 Exit Latency: Not Reported 00:19:37.448 Relative Read Throughput: 0 00:19:37.448 Relative Read Latency: 0 00:19:37.448 Relative Write Throughput: 0 00:19:37.448 Relative Write Latency: 0 00:19:37.448 Idle Power: Not Reported 00:19:37.448 Active Power: Not Reported 00:19:37.448 Non-Operational Permissive Mode: Not Supported 00:19:37.448 00:19:37.448 Health Information 00:19:37.448 ================== 00:19:37.448 Critical Warnings: 00:19:37.448 Available Spare Space: OK 00:19:37.448 Temperature: OK 00:19:37.448 Device Reliability: OK 00:19:37.448 Read Only: No 00:19:37.448 Volatile Memory Backup: OK 00:19:37.448 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:37.448 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:37.448 Available Spare: 0% 00:19:37.448 Available Sp[2024-12-10 00:01:12.323477] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:37.448 [2024-12-10 00:01:12.323491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:37.448 [2024-12-10 00:01:12.323518] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:19:37.448 [2024-12-10 00:01:12.323527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.448 [2024-12-10 00:01:12.323533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.448 [2024-12-10 00:01:12.323538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.448 [2024-12-10 00:01:12.323543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.448 [2024-12-10 00:01:12.323621] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:37.448 [2024-12-10 00:01:12.323631] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:37.448 [2024-12-10 
00:01:12.324627] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:37.448 [2024-12-10 00:01:12.324676] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:19:37.448 [2024-12-10 00:01:12.324682] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:19:37.448 [2024-12-10 00:01:12.325629] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:37.448 [2024-12-10 00:01:12.325641] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:19:37.448 [2024-12-10 00:01:12.325690] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:37.448 [2024-12-10 00:01:12.328169] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:37.448 are Threshold: 0% 00:19:37.448 Life Percentage Used: 0% 00:19:37.448 Data Units Read: 0 00:19:37.448 Data Units Written: 0 00:19:37.448 Host Read Commands: 0 00:19:37.448 Host Write Commands: 0 00:19:37.448 Controller Busy Time: 0 minutes 00:19:37.448 Power Cycles: 0 00:19:37.448 Power On Hours: 0 hours 00:19:37.448 Unsafe Shutdowns: 0 00:19:37.448 Unrecoverable Media Errors: 0 00:19:37.448 Lifetime Error Log Entries: 0 00:19:37.448 Warning Temperature Time: 0 minutes 00:19:37.448 Critical Temperature Time: 0 minutes 00:19:37.448 00:19:37.448 Number of Queues 00:19:37.448 ================ 00:19:37.448 Number of I/O Submission Queues: 127 00:19:37.448 Number of I/O Completion Queues: 127 00:19:37.448 00:19:37.448 Active Namespaces 00:19:37.448 ================= 00:19:37.448 Namespace ID:1 00:19:37.448 Error Recovery Timeout: Unlimited 00:19:37.448 Command Set Identifier: NVM (00h) 00:19:37.448 Deallocate: Supported 00:19:37.448 Deallocated/Unwritten Error: Not Supported 00:19:37.448 Deallocated Read Value: Unknown 00:19:37.448 Deallocate in Write Zeroes: Not Supported 00:19:37.448 Deallocated Guard Field: 0xFFFF 00:19:37.448 Flush: Supported 00:19:37.448 Reservation: Supported 00:19:37.448 Namespace Sharing Capabilities: Multiple Controllers 00:19:37.448 Size (in LBAs): 131072 (0GiB) 00:19:37.448 Capacity (in LBAs): 131072 (0GiB) 00:19:37.448 Utilization (in LBAs): 131072 (0GiB) 00:19:37.448 NGUID: 247DF23EDD18410CADC941829707AB5C 00:19:37.448 UUID: 247df23e-dd18-410c-adc9-41829707ab5c 00:19:37.448 Thin Provisioning: Not Supported 00:19:37.448 Per-NS Atomic Units: Yes 00:19:37.448 Atomic Boundary Size (Normal): 0 00:19:37.448 Atomic Boundary Size (PFail): 0 00:19:37.448 Atomic Boundary Offset: 0 00:19:37.448 Maximum Single Source Range Length: 65535 00:19:37.448 Maximum Copy Length: 65535 00:19:37.448 Maximum Source Range Count: 1 00:19:37.448 NGUID/EUI64 Never Reused: No 00:19:37.448 Namespace Write Protected: No 00:19:37.448 Number of LBA Formats: 1 00:19:37.448 Current LBA Format: LBA Format #00 00:19:37.448 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:37.448 00:19:37.448 00:01:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w 
read -t 5 -c 0x2 00:19:37.720 [2024-12-10 00:01:12.555007] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:43.112 Initializing NVMe Controllers 00:19:43.112 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:43.112 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:43.112 Initialization complete. Launching workers. 00:19:43.112 ======================================================== 00:19:43.112 Latency(us) 00:19:43.112 Device Information : IOPS MiB/s Average min max 00:19:43.112 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39955.66 156.08 3203.37 1007.94 6589.12 00:19:43.112 ======================================================== 00:19:43.112 Total : 39955.66 156.08 3203.37 1007.94 6589.12 00:19:43.112 00:19:43.112 [2024-12-10 00:01:17.575509] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:43.112 00:01:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:43.112 [2024-12-10 00:01:17.814623] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:48.610 Initializing NVMe Controllers 00:19:48.610 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:48.610 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:48.610 Initialization complete. Launching workers. 
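
(The read and write passes above use the same spdk_nvme_perf invocation and differ only in the -w workload; because the command is easy to lose in the wrapped log, a minimal sketch follows. $SPDK_DIR is a placeholder for the workspace build root used by the harness, and the vfio-user socket path is the one created earlier in this run.)

    # Sketch only: 4 KiB I/O, queue depth 128, 5 seconds, pinned to core 1 (mask 0x2),
    # against the vfio-user controller exported at /var/run/vfio-user/domain/vfio-user1/1.
    $SPDK_DIR/build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
    # The write pass swaps only the workload flag: -w write
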
00:19:48.610 ======================================================== 00:19:48.610 Latency(us) 00:19:48.610 Device Information : IOPS MiB/s Average min max 00:19:48.610 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.25 62.71 7978.32 6949.33 8981.24 00:19:48.610 ======================================================== 00:19:48.610 Total : 16054.25 62.71 7978.32 6949.33 8981.24 00:19:48.610 00:19:48.610 [2024-12-10 00:01:22.855553] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:48.610 00:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:48.610 [2024-12-10 00:01:23.058540] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:54.063 [2024-12-10 00:01:28.149565] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:54.063 Initializing NVMe Controllers 00:19:54.063 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:54.063 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:54.063 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:54.063 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:54.063 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:54.063 Initialization complete. Launching workers. 00:19:54.063 Starting thread on core 2 00:19:54.063 Starting thread on core 3 00:19:54.063 Starting thread on core 1 00:19:54.063 00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:54.063 [2024-12-10 00:01:28.448563] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:56.700 [2024-12-10 00:01:31.512351] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:56.700 Initializing NVMe Controllers 00:19:56.700 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:56.700 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:56.700 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:56.700 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:56.700 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:56.700 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:56.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration run with configuration: 00:19:56.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:56.700 Initialization complete. Launching workers. 
00:19:56.700 Starting thread on core 1 with urgent priority queue 00:19:56.700 Starting thread on core 2 with urgent priority queue 00:19:56.700 Starting thread on core 3 with urgent priority queue 00:19:56.700 Starting thread on core 0 with urgent priority queue 00:19:56.700 SPDK bdev Controller (SPDK1 ) core 0: 4883.00 IO/s 20.48 secs/100000 ios 00:19:56.700 SPDK bdev Controller (SPDK1 ) core 1: 5046.00 IO/s 19.82 secs/100000 ios 00:19:56.700 SPDK bdev Controller (SPDK1 ) core 2: 6668.67 IO/s 15.00 secs/100000 ios 00:19:56.700 SPDK bdev Controller (SPDK1 ) core 3: 6325.33 IO/s 15.81 secs/100000 ios 00:19:56.700 ======================================================== 00:19:56.700 00:19:56.700 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:56.981 [2024-12-10 00:01:31.800301] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:56.982 Initializing NVMe Controllers 00:19:56.982 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:56.982 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:56.982 Namespace ID: 1 size: 0GB 00:19:56.982 Initialization complete. 00:19:56.982 INFO: using host memory buffer for IO 00:19:56.982 Hello world! 00:19:56.982 [2024-12-10 00:01:31.835561] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:56.982 00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:57.265 [2024-12-10 00:01:32.121375] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:58.221 Initializing NVMe Controllers 00:19:58.221 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:58.221 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:58.221 Initialization complete. Launching workers. 
00:19:58.221 submit (in ns) avg, min, max = 5700.1, 3232.2, 4000060.0 00:19:58.221 complete (in ns) avg, min, max = 21483.9, 1775.7, 6988917.4 00:19:58.221 00:19:58.221 Submit histogram 00:19:58.221 ================ 00:19:58.221 Range in us Cumulative Count 00:19:58.221 3.228 - 3.242: 0.0124% ( 2) 00:19:58.221 3.242 - 3.256: 0.0247% ( 2) 00:19:58.221 3.256 - 3.270: 0.0309% ( 1) 00:19:58.221 3.270 - 3.283: 0.0495% ( 3) 00:19:58.221 3.283 - 3.297: 0.1237% ( 12) 00:19:58.221 3.297 - 3.311: 1.1809% ( 171) 00:19:58.221 3.311 - 3.325: 4.2908% ( 503) 00:19:58.221 3.325 - 3.339: 9.0763% ( 774) 00:19:58.221 3.339 - 3.353: 14.6408% ( 900) 00:19:58.221 3.353 - 3.367: 20.4897% ( 946) 00:19:58.221 3.367 - 3.381: 26.6292% ( 993) 00:19:58.221 3.381 - 3.395: 32.1380% ( 891) 00:19:58.221 3.395 - 3.409: 37.7458% ( 907) 00:19:58.221 3.409 - 3.423: 42.5621% ( 779) 00:19:58.221 3.423 - 3.437: 47.1807% ( 747) 00:19:58.221 3.437 - 3.450: 51.5642% ( 709) 00:19:58.221 3.450 - 3.464: 57.5492% ( 968) 00:19:58.221 3.464 - 3.478: 64.0349% ( 1049) 00:19:58.221 3.478 - 3.492: 68.4926% ( 721) 00:19:58.221 3.492 - 3.506: 73.4327% ( 799) 00:19:58.221 3.506 - 3.520: 78.0697% ( 750) 00:19:58.221 3.520 - 3.534: 81.7238% ( 591) 00:19:58.221 3.534 - 3.548: 84.4504% ( 441) 00:19:58.221 3.548 - 3.562: 85.6684% ( 197) 00:19:58.221 3.562 - 3.590: 87.0224% ( 219) 00:19:58.221 3.590 - 3.617: 88.0302% ( 163) 00:19:58.221 3.617 - 3.645: 89.7861% ( 284) 00:19:58.221 3.645 - 3.673: 91.4802% ( 274) 00:19:58.221 3.673 - 3.701: 93.1433% ( 269) 00:19:58.221 3.701 - 3.729: 94.9178% ( 287) 00:19:58.221 3.729 - 3.757: 96.6118% ( 274) 00:19:58.221 3.757 - 3.784: 97.8113% ( 194) 00:19:58.221 3.784 - 3.812: 98.6583% ( 137) 00:19:58.221 3.812 - 3.840: 99.1468% ( 79) 00:19:58.221 3.840 - 3.868: 99.4250% ( 45) 00:19:58.221 3.868 - 3.896: 99.5425% ( 19) 00:19:58.221 3.896 - 3.923: 99.5734% ( 5) 00:19:58.221 3.923 - 3.951: 99.5919% ( 3) 00:19:58.221 3.951 - 3.979: 99.6105% ( 3) 00:19:58.221 4.230 - 4.257: 99.6167% ( 1) 00:19:58.221 5.176 - 5.203: 99.6229% ( 1) 00:19:58.221 5.343 - 5.370: 99.6290% ( 1) 00:19:58.221 5.398 - 5.426: 99.6352% ( 1) 00:19:58.221 5.482 - 5.510: 99.6414% ( 1) 00:19:58.221 5.593 - 5.621: 99.6538% ( 2) 00:19:58.221 5.677 - 5.704: 99.6599% ( 1) 00:19:58.221 5.816 - 5.843: 99.6723% ( 2) 00:19:58.221 5.843 - 5.871: 99.6847% ( 2) 00:19:58.221 5.927 - 5.955: 99.6909% ( 1) 00:19:58.221 5.955 - 5.983: 99.7032% ( 2) 00:19:58.221 6.010 - 6.038: 99.7094% ( 1) 00:19:58.221 6.094 - 6.122: 99.7156% ( 1) 00:19:58.221 6.233 - 6.261: 99.7218% ( 1) 00:19:58.221 6.483 - 6.511: 99.7280% ( 1) 00:19:58.221 6.595 - 6.623: 99.7341% ( 1) 00:19:58.221 6.650 - 6.678: 99.7403% ( 1) 00:19:58.221 6.762 - 6.790: 99.7527% ( 2) 00:19:58.221 6.845 - 6.873: 99.7589% ( 1) 00:19:58.221 6.873 - 6.901: 99.7651% ( 1) 00:19:58.221 6.929 - 6.957: 99.7712% ( 1) 00:19:58.221 6.957 - 6.984: 99.7774% ( 1) 00:19:58.221 7.012 - 7.040: 99.7836% ( 1) 00:19:58.221 7.123 - 7.179: 99.7898% ( 1) 00:19:58.221 7.290 - 7.346: 99.7960% ( 1) 00:19:58.221 7.402 - 7.457: 99.8022% ( 1) 00:19:58.221 7.457 - 7.513: 99.8083% ( 1) 00:19:58.221 7.513 - 7.569: 99.8145% ( 1) 00:19:58.221 7.569 - 7.624: 99.8207% ( 1) 00:19:58.221 7.680 - 7.736: 99.8269% ( 1) 00:19:58.221 7.903 - 7.958: 99.8331% ( 1) 00:19:58.221 7.958 - 8.014: 99.8392% ( 1) 00:19:58.221 8.014 - 8.070: 99.8454% ( 1) 00:19:58.221 8.181 - 8.237: 99.8516% ( 1) 00:19:58.221 8.292 - 8.348: 99.8578% ( 1) 00:19:58.221 8.348 - 8.403: 99.8702% ( 2) 00:19:58.221 8.459 - 8.515: 99.8825% ( 2) 00:19:58.221 8.626 - 8.682: 99.8949% ( 2) 
00:19:58.221 8.960 - 9.016: 99.9011% ( 1) 00:19:58.221 9.517 - 9.572: 99.9073% ( 1) 00:19:58.221 [2024-12-10 00:01:33.142502] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:58.480 9.795 - 9.850: 99.9134% ( 1) 00:19:58.481 13.913 - 13.969: 99.9258% ( 2) 00:19:58.481 14.080 - 14.136: 99.9320% ( 1) 00:19:58.481 16.139 - 16.250: 99.9382% ( 1) 00:19:58.481 19.144 - 19.256: 99.9444% ( 1) 00:19:58.481 3989.148 - 4017.642: 100.0000% ( 9) 00:19:58.481 00:19:58.481 Complete histogram 00:19:58.481 ================== 00:19:58.481 Range in us Cumulative Count 00:19:58.481 1.774 - 1.781: 0.0062% ( 1) 00:19:58.481 1.809 - 1.823: 0.3586% ( 57) 00:19:58.481 1.823 - 1.837: 3.4685% ( 503) 00:19:58.481 1.837 - 1.850: 6.2879% ( 456) 00:19:58.481 1.850 - 1.864: 7.9943% ( 276) 00:19:58.481 1.864 - 1.878: 12.9034% ( 794) 00:19:58.481 1.878 - 1.892: 53.6169% ( 6585) 00:19:58.481 1.892 - 1.906: 86.3423% ( 5293) 00:19:58.481 1.906 - 1.920: 94.5530% ( 1328) 00:19:58.481 1.920 - 1.934: 96.3954% ( 298) 00:19:58.481 1.934 - 1.948: 96.8777% ( 78) 00:19:58.481 1.948 - 1.962: 97.7989% ( 149) 00:19:58.481 1.962 - 1.976: 98.7016% ( 146) 00:19:58.481 1.976 - 1.990: 99.0973% ( 64) 00:19:58.481 1.990 - 2.003: 99.2024% ( 17) 00:19:58.481 2.003 - 2.017: 99.2333% ( 5) 00:19:58.481 2.017 - 2.031: 99.2395% ( 1) 00:19:58.481 2.031 - 2.045: 99.2457% ( 1) 00:19:58.481 2.045 - 2.059: 99.2643% ( 3) 00:19:58.481 2.059 - 2.073: 99.2828% ( 3) 00:19:58.481 2.087 - 2.101: 99.2890% ( 1) 00:19:58.481 2.115 - 2.129: 99.2952% ( 1) 00:19:58.481 2.157 - 2.170: 99.3137% ( 3) 00:19:58.481 2.170 - 2.184: 99.3199% ( 1) 00:19:58.481 2.282 - 2.296: 99.3261% ( 1) 00:19:58.481 2.296 - 2.310: 99.3323% ( 1) 00:19:58.481 2.379 - 2.393: 99.3384% ( 1) 00:19:58.481 3.923 - 3.951: 99.3446% ( 1) 00:19:58.481 3.951 - 3.979: 99.3508% ( 1) 00:19:58.481 4.397 - 4.424: 99.3570% ( 1) 00:19:58.481 4.536 - 4.563: 99.3632% ( 1) 00:19:58.481 4.730 - 4.758: 99.3694% ( 1) 00:19:58.481 5.009 - 5.037: 99.3755% ( 1) 00:19:58.481 5.092 - 5.120: 99.3817% ( 1) 00:19:58.481 5.426 - 5.454: 99.3879% ( 1) 00:19:58.481 5.482 - 5.510: 99.3941% ( 1) 00:19:58.481 5.510 - 5.537: 99.4003% ( 1) 00:19:58.481 5.649 - 5.677: 99.4065% ( 1) 00:19:58.481 5.732 - 5.760: 99.4126% ( 1) 00:19:58.481 5.816 - 5.843: 99.4188% ( 1) 00:19:58.481 5.955 - 5.983: 99.4312% ( 2) 00:19:58.481 6.066 - 6.094: 99.4374% ( 1) 00:19:58.481 6.289 - 6.317: 99.4436% ( 1) 00:19:58.481 6.344 - 6.372: 99.4497% ( 1) 00:19:58.481 6.456 - 6.483: 99.4559% ( 1) 00:19:58.481 6.845 - 6.873: 99.4621% ( 1) 00:19:58.481 6.984 - 7.012: 99.4683% ( 1) 00:19:58.481 7.624 - 7.680: 99.4745% ( 1) 00:19:58.481 7.791 - 7.847: 99.4806% ( 1) 00:19:58.481 8.237 - 8.292: 99.4868% ( 1) 00:19:58.481 9.850 - 9.906: 99.4930% ( 1) 00:19:58.481 13.690 - 13.746: 99.4992% ( 1) 00:19:58.481 17.475 - 17.586: 99.5054% ( 1) 00:19:58.481 1025.781 - 1032.904: 99.5116% ( 1) 00:19:58.481 1154.003 - 1161.127: 99.5177% ( 1) 00:19:58.481 2008.821 - 2023.068: 99.5239% ( 1) 00:19:58.481 3989.148 - 4017.642: 99.9876% ( 75) 00:19:58.481 5983.722 - 6012.216: 99.9938% ( 1) 00:19:58.481 6981.009 - 7009.503: 100.0000% ( 1) 00:19:58.481 00:19:58.481 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:58.481 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:58.481 00:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:58.481 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:58.481 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:58.481 [ 00:19:58.481 { 00:19:58.481 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:58.481 "subtype": "Discovery", 00:19:58.481 "listen_addresses": [], 00:19:58.481 "allow_any_host": true, 00:19:58.481 "hosts": [] 00:19:58.481 }, 00:19:58.481 { 00:19:58.481 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:58.481 "subtype": "NVMe", 00:19:58.481 "listen_addresses": [ 00:19:58.481 { 00:19:58.481 "trtype": "VFIOUSER", 00:19:58.481 "adrfam": "IPv4", 00:19:58.481 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:58.481 "trsvcid": "0" 00:19:58.481 } 00:19:58.481 ], 00:19:58.481 "allow_any_host": true, 00:19:58.481 "hosts": [], 00:19:58.481 "serial_number": "SPDK1", 00:19:58.481 "model_number": "SPDK bdev Controller", 00:19:58.481 "max_namespaces": 32, 00:19:58.481 "min_cntlid": 1, 00:19:58.481 "max_cntlid": 65519, 00:19:58.481 "namespaces": [ 00:19:58.481 { 00:19:58.481 "nsid": 1, 00:19:58.481 "bdev_name": "Malloc1", 00:19:58.481 "name": "Malloc1", 00:19:58.481 "nguid": "247DF23EDD18410CADC941829707AB5C", 00:19:58.481 "uuid": "247df23e-dd18-410c-adc9-41829707ab5c" 00:19:58.481 } 00:19:58.481 ] 00:19:58.481 }, 00:19:58.481 { 00:19:58.481 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:58.481 "subtype": "NVMe", 00:19:58.481 "listen_addresses": [ 00:19:58.481 { 00:19:58.481 "trtype": "VFIOUSER", 00:19:58.481 "adrfam": "IPv4", 00:19:58.481 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:58.481 "trsvcid": "0" 00:19:58.481 } 00:19:58.481 ], 00:19:58.481 "allow_any_host": true, 00:19:58.481 "hosts": [], 00:19:58.481 "serial_number": "SPDK2", 00:19:58.481 "model_number": "SPDK bdev Controller", 00:19:58.481 "max_namespaces": 32, 00:19:58.481 "min_cntlid": 1, 00:19:58.481 "max_cntlid": 65519, 00:19:58.481 "namespaces": [ 00:19:58.481 { 00:19:58.481 "nsid": 1, 00:19:58.481 "bdev_name": "Malloc2", 00:19:58.481 "name": "Malloc2", 00:19:58.481 "nguid": "646FEAA15E294EEA8D97321C3D7A4BA5", 00:19:58.481 "uuid": "646feaa1-5e29-4eea-8d97-321c3d7a4ba5" 00:19:58.481 } 00:19:58.481 ] 00:19:58.481 } 00:19:58.481 ] 00:19:58.481 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:58.481 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=335549 00:19:58.481 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:58.481 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:58.481 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:58.481 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:58.481 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:19:58.481 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:19:58.481 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:58.741 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:58.741 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:19:58.741 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:19:58.742 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:58.742 [2024-12-10 00:01:33.552919] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:58.742 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:58.742 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:58.742 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:58.742 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:58.742 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:59.001 Malloc3 00:19:59.001 00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:59.260 [2024-12-10 00:01:34.002391] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:59.260 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:59.260 Asynchronous Event Request test 00:19:59.260 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:59.260 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:59.260 Registering asynchronous event callbacks... 00:19:59.260 Starting namespace attribute notice tests for all controllers... 00:19:59.260 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:59.260 aer_cb - Changed Namespace 00:19:59.260 Cleaning up... 
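
(The namespace-change AER exercised here is driven entirely over the SPDK RPC socket; condensed from the commands in the trace above, the sequence is sketched below, with $SPDK_DIR again standing in as a placeholder for the workspace path.)

    # Sketch of the RPC sequence shown in the log:
    # 1. create a 64 MB malloc bdev with a 512-byte block size
    $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
    # 2. attach it to the existing vfio-user subsystem as namespace 2;
    #    this is what triggers the "Changed Namespace" AER reported above
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
    # 3. list subsystems to confirm the new namespace (the JSON output follows in the log)
    $SPDK_DIR/scripts/rpc.py nvmf_get_subsystems
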
00:19:59.521 [ 00:19:59.521 { 00:19:59.521 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:59.521 "subtype": "Discovery", 00:19:59.521 "listen_addresses": [], 00:19:59.521 "allow_any_host": true, 00:19:59.521 "hosts": [] 00:19:59.521 }, 00:19:59.521 { 00:19:59.521 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:59.521 "subtype": "NVMe", 00:19:59.521 "listen_addresses": [ 00:19:59.521 { 00:19:59.521 "trtype": "VFIOUSER", 00:19:59.521 "adrfam": "IPv4", 00:19:59.521 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:59.521 "trsvcid": "0" 00:19:59.521 } 00:19:59.521 ], 00:19:59.521 "allow_any_host": true, 00:19:59.521 "hosts": [], 00:19:59.521 "serial_number": "SPDK1", 00:19:59.521 "model_number": "SPDK bdev Controller", 00:19:59.521 "max_namespaces": 32, 00:19:59.521 "min_cntlid": 1, 00:19:59.521 "max_cntlid": 65519, 00:19:59.521 "namespaces": [ 00:19:59.521 { 00:19:59.521 "nsid": 1, 00:19:59.521 "bdev_name": "Malloc1", 00:19:59.521 "name": "Malloc1", 00:19:59.521 "nguid": "247DF23EDD18410CADC941829707AB5C", 00:19:59.521 "uuid": "247df23e-dd18-410c-adc9-41829707ab5c" 00:19:59.521 }, 00:19:59.521 { 00:19:59.521 "nsid": 2, 00:19:59.521 "bdev_name": "Malloc3", 00:19:59.521 "name": "Malloc3", 00:19:59.521 "nguid": "CD376F27C81045169A736D353F0DF7A2", 00:19:59.521 "uuid": "cd376f27-c810-4516-9a73-6d353f0df7a2" 00:19:59.521 } 00:19:59.521 ] 00:19:59.521 }, 00:19:59.521 { 00:19:59.521 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:59.521 "subtype": "NVMe", 00:19:59.521 "listen_addresses": [ 00:19:59.521 { 00:19:59.521 "trtype": "VFIOUSER", 00:19:59.521 "adrfam": "IPv4", 00:19:59.521 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:59.521 "trsvcid": "0" 00:19:59.521 } 00:19:59.521 ], 00:19:59.521 "allow_any_host": true, 00:19:59.521 "hosts": [], 00:19:59.521 "serial_number": "SPDK2", 00:19:59.521 "model_number": "SPDK bdev Controller", 00:19:59.521 "max_namespaces": 32, 00:19:59.521 "min_cntlid": 1, 00:19:59.521 "max_cntlid": 65519, 00:19:59.521 "namespaces": [ 00:19:59.521 { 00:19:59.521 "nsid": 1, 00:19:59.521 "bdev_name": "Malloc2", 00:19:59.521 "name": "Malloc2", 00:19:59.521 "nguid": "646FEAA15E294EEA8D97321C3D7A4BA5", 00:19:59.521 "uuid": "646feaa1-5e29-4eea-8d97-321c3d7a4ba5" 00:19:59.521 } 00:19:59.521 ] 00:19:59.521 } 00:19:59.521 ] 00:19:59.521 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 335549 00:19:59.521 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:59.521 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:59.521 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:59.521 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:59.521 [2024-12-10 00:01:34.251654] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:19:59.521 [2024-12-10 00:01:34.251701] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335567 ] 00:19:59.521 [2024-12-10 00:01:34.290059] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:59.521 [2024-12-10 00:01:34.298320] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:59.521 [2024-12-10 00:01:34.298346] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd8992b0000 00:19:59.521 [2024-12-10 00:01:34.299320] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:59.521 [2024-12-10 00:01:34.300324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:59.521 [2024-12-10 00:01:34.301335] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:59.521 [2024-12-10 00:01:34.302344] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:59.521 [2024-12-10 00:01:34.303355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:59.521 [2024-12-10 00:01:34.304366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:59.521 [2024-12-10 00:01:34.305376] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:59.521 [2024-12-10 00:01:34.306383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:59.522 [2024-12-10 00:01:34.307391] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:59.522 [2024-12-10 00:01:34.307405] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd8992a5000 00:19:59.522 [2024-12-10 00:01:34.308348] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:59.522 [2024-12-10 00:01:34.317868] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:59.522 [2024-12-10 00:01:34.317893] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:19:59.522 [2024-12-10 00:01:34.322977] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:59.522 [2024-12-10 00:01:34.323015] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:59.522 [2024-12-10 00:01:34.323088] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:19:59.522 
[2024-12-10 00:01:34.323102] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:19:59.522 [2024-12-10 00:01:34.323107] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:19:59.522 [2024-12-10 00:01:34.323987] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:59.522 [2024-12-10 00:01:34.323998] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:19:59.522 [2024-12-10 00:01:34.324005] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:19:59.522 [2024-12-10 00:01:34.324990] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:59.522 [2024-12-10 00:01:34.325000] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:19:59.522 [2024-12-10 00:01:34.325006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:59.522 [2024-12-10 00:01:34.325994] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:59.522 [2024-12-10 00:01:34.326005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:59.522 [2024-12-10 00:01:34.326996] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:59.522 [2024-12-10 00:01:34.327006] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:19:59.522 [2024-12-10 00:01:34.327013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:59.522 [2024-12-10 00:01:34.327020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:59.522 [2024-12-10 00:01:34.327127] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:19:59.522 [2024-12-10 00:01:34.327132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:59.522 [2024-12-10 00:01:34.327137] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:59.522 [2024-12-10 00:01:34.328012] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:59.522 [2024-12-10 00:01:34.329012] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:59.522 [2024-12-10 00:01:34.330023] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:59.522 [2024-12-10 00:01:34.331023] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:59.522 [2024-12-10 00:01:34.331062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:59.522 [2024-12-10 00:01:34.332032] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:59.522 [2024-12-10 00:01:34.332041] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:59.522 [2024-12-10 00:01:34.332045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:59.522 [2024-12-10 00:01:34.332063] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:19:59.522 [2024-12-10 00:01:34.332070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:59.522 [2024-12-10 00:01:34.332083] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:59.522 [2024-12-10 00:01:34.332089] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:59.522 [2024-12-10 00:01:34.332092] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:59.522 [2024-12-10 00:01:34.332103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:59.522 [2024-12-10 00:01:34.340168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:59.522 [2024-12-10 00:01:34.340181] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:19:59.522 [2024-12-10 00:01:34.340186] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:19:59.522 [2024-12-10 00:01:34.340190] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:19:59.522 [2024-12-10 00:01:34.340194] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:59.522 [2024-12-10 00:01:34.340198] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:19:59.522 [2024-12-10 00:01:34.340205] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:19:59.522 [2024-12-10 00:01:34.340210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:19:59.522 [2024-12-10 00:01:34.340217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:59.522 [2024-12-10 
00:01:34.340228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:59.522 [2024-12-10 00:01:34.348163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:59.522 [2024-12-10 00:01:34.348176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.522 [2024-12-10 00:01:34.348184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.522 [2024-12-10 00:01:34.348192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.522 [2024-12-10 00:01:34.348201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.522 [2024-12-10 00:01:34.348206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:59.522 [2024-12-10 00:01:34.348215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:59.522 [2024-12-10 00:01:34.348224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:59.522 [2024-12-10 00:01:34.356164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:59.522 [2024-12-10 00:01:34.356174] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:19:59.522 [2024-12-10 00:01:34.356178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:59.522 [2024-12-10 00:01:34.356189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:19:59.522 [2024-12-10 00:01:34.356194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:59.522 [2024-12-10 00:01:34.356202] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:59.522 [2024-12-10 00:01:34.364164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:59.522 [2024-12-10 00:01:34.364222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:19:59.522 [2024-12-10 00:01:34.364230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:59.522 [2024-12-10 00:01:34.364237] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:59.522 [2024-12-10 00:01:34.364242] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:19:59.522 [2024-12-10 00:01:34.364245] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:59.522 [2024-12-10 00:01:34.364251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:59.522 [2024-12-10 00:01:34.372167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:59.522 [2024-12-10 00:01:34.372181] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:19:59.522 [2024-12-10 00:01:34.372192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:19:59.522 [2024-12-10 00:01:34.372199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:59.522 [2024-12-10 00:01:34.372205] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:59.522 [2024-12-10 00:01:34.372209] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:59.522 [2024-12-10 00:01:34.372213] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:59.522 [2024-12-10 00:01:34.372218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:59.522 [2024-12-10 00:01:34.380167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:59.522 [2024-12-10 00:01:34.380179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:59.523 [2024-12-10 00:01:34.380186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:59.523 [2024-12-10 00:01:34.380192] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:59.523 [2024-12-10 00:01:34.380197] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:59.523 [2024-12-10 00:01:34.380200] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:59.523 [2024-12-10 00:01:34.380205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:59.523 [2024-12-10 00:01:34.388166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:59.523 [2024-12-10 00:01:34.388179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:59.523 [2024-12-10 00:01:34.388187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:59.523 [2024-12-10 00:01:34.388193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:19:59.523 [2024-12-10 00:01:34.388199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:19:59.523 [2024-12-10 00:01:34.388203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:59.523 [2024-12-10 00:01:34.388208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:19:59.523 [2024-12-10 00:01:34.388213] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:59.523 [2024-12-10 00:01:34.388217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:19:59.523 [2024-12-10 00:01:34.388222] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:19:59.523 [2024-12-10 00:01:34.388241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:59.523 [2024-12-10 00:01:34.396169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:59.523 [2024-12-10 00:01:34.396183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:59.523 [2024-12-10 00:01:34.404164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:59.523 [2024-12-10 00:01:34.404176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:59.523 [2024-12-10 00:01:34.412164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:59.523 [2024-12-10 00:01:34.412177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:59.523 [2024-12-10 00:01:34.420165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:59.523 [2024-12-10 00:01:34.420180] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:59.523 [2024-12-10 00:01:34.420185] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:59.523 [2024-12-10 00:01:34.420188] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:59.523 [2024-12-10 00:01:34.420191] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:59.523 [2024-12-10 00:01:34.420194] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:59.523 [2024-12-10 00:01:34.420200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:59.523 [2024-12-10 00:01:34.420207] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:59.523 
[2024-12-10 00:01:34.420211] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:59.523 [2024-12-10 00:01:34.420214] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:59.523 [2024-12-10 00:01:34.420219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:59.523 [2024-12-10 00:01:34.420225] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:59.523 [2024-12-10 00:01:34.420229] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:59.523 [2024-12-10 00:01:34.420232] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:59.523 [2024-12-10 00:01:34.420238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:59.523 [2024-12-10 00:01:34.420245] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:59.523 [2024-12-10 00:01:34.420249] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:59.523 [2024-12-10 00:01:34.420252] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:59.523 [2024-12-10 00:01:34.420257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:59.523 [2024-12-10 00:01:34.428165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:59.523 [2024-12-10 00:01:34.428180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:59.523 [2024-12-10 00:01:34.428190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:59.523 [2024-12-10 00:01:34.428198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:59.523 ===================================================== 00:19:59.523 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:59.523 ===================================================== 00:19:59.523 Controller Capabilities/Features 00:19:59.523 ================================ 00:19:59.523 Vendor ID: 4e58 00:19:59.523 Subsystem Vendor ID: 4e58 00:19:59.523 Serial Number: SPDK2 00:19:59.523 Model Number: SPDK bdev Controller 00:19:59.523 Firmware Version: 25.01 00:19:59.523 Recommended Arb Burst: 6 00:19:59.523 IEEE OUI Identifier: 8d 6b 50 00:19:59.523 Multi-path I/O 00:19:59.523 May have multiple subsystem ports: Yes 00:19:59.523 May have multiple controllers: Yes 00:19:59.523 Associated with SR-IOV VF: No 00:19:59.523 Max Data Transfer Size: 131072 00:19:59.523 Max Number of Namespaces: 32 00:19:59.523 Max Number of I/O Queues: 127 00:19:59.523 NVMe Specification Version (VS): 1.3 00:19:59.523 NVMe Specification Version (Identify): 1.3 00:19:59.523 Maximum Queue Entries: 256 00:19:59.523 Contiguous Queues Required: Yes 00:19:59.523 Arbitration Mechanisms Supported 00:19:59.523 Weighted Round Robin: Not Supported 00:19:59.523 Vendor Specific: Not 
Supported 00:19:59.523 Reset Timeout: 15000 ms 00:19:59.523 Doorbell Stride: 4 bytes 00:19:59.523 NVM Subsystem Reset: Not Supported 00:19:59.523 Command Sets Supported 00:19:59.523 NVM Command Set: Supported 00:19:59.523 Boot Partition: Not Supported 00:19:59.523 Memory Page Size Minimum: 4096 bytes 00:19:59.523 Memory Page Size Maximum: 4096 bytes 00:19:59.523 Persistent Memory Region: Not Supported 00:19:59.523 Optional Asynchronous Events Supported 00:19:59.523 Namespace Attribute Notices: Supported 00:19:59.523 Firmware Activation Notices: Not Supported 00:19:59.523 ANA Change Notices: Not Supported 00:19:59.523 PLE Aggregate Log Change Notices: Not Supported 00:19:59.523 LBA Status Info Alert Notices: Not Supported 00:19:59.523 EGE Aggregate Log Change Notices: Not Supported 00:19:59.523 Normal NVM Subsystem Shutdown event: Not Supported 00:19:59.523 Zone Descriptor Change Notices: Not Supported 00:19:59.523 Discovery Log Change Notices: Not Supported 00:19:59.523 Controller Attributes 00:19:59.523 128-bit Host Identifier: Supported 00:19:59.523 Non-Operational Permissive Mode: Not Supported 00:19:59.523 NVM Sets: Not Supported 00:19:59.523 Read Recovery Levels: Not Supported 00:19:59.523 Endurance Groups: Not Supported 00:19:59.523 Predictable Latency Mode: Not Supported 00:19:59.523 Traffic Based Keep ALive: Not Supported 00:19:59.523 Namespace Granularity: Not Supported 00:19:59.523 SQ Associations: Not Supported 00:19:59.523 UUID List: Not Supported 00:19:59.523 Multi-Domain Subsystem: Not Supported 00:19:59.523 Fixed Capacity Management: Not Supported 00:19:59.523 Variable Capacity Management: Not Supported 00:19:59.523 Delete Endurance Group: Not Supported 00:19:59.523 Delete NVM Set: Not Supported 00:19:59.523 Extended LBA Formats Supported: Not Supported 00:19:59.523 Flexible Data Placement Supported: Not Supported 00:19:59.523 00:19:59.523 Controller Memory Buffer Support 00:19:59.523 ================================ 00:19:59.523 Supported: No 00:19:59.523 00:19:59.523 Persistent Memory Region Support 00:19:59.523 ================================ 00:19:59.523 Supported: No 00:19:59.523 00:19:59.523 Admin Command Set Attributes 00:19:59.523 ============================ 00:19:59.523 Security Send/Receive: Not Supported 00:19:59.523 Format NVM: Not Supported 00:19:59.523 Firmware Activate/Download: Not Supported 00:19:59.523 Namespace Management: Not Supported 00:19:59.523 Device Self-Test: Not Supported 00:19:59.523 Directives: Not Supported 00:19:59.523 NVMe-MI: Not Supported 00:19:59.523 Virtualization Management: Not Supported 00:19:59.523 Doorbell Buffer Config: Not Supported 00:19:59.523 Get LBA Status Capability: Not Supported 00:19:59.523 Command & Feature Lockdown Capability: Not Supported 00:19:59.523 Abort Command Limit: 4 00:19:59.523 Async Event Request Limit: 4 00:19:59.523 Number of Firmware Slots: N/A 00:19:59.523 Firmware Slot 1 Read-Only: N/A 00:19:59.523 Firmware Activation Without Reset: N/A 00:19:59.523 Multiple Update Detection Support: N/A 00:19:59.523 Firmware Update Granularity: No Information Provided 00:19:59.523 Per-Namespace SMART Log: No 00:19:59.523 Asymmetric Namespace Access Log Page: Not Supported 00:19:59.523 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:59.524 Command Effects Log Page: Supported 00:19:59.524 Get Log Page Extended Data: Supported 00:19:59.524 Telemetry Log Pages: Not Supported 00:19:59.524 Persistent Event Log Pages: Not Supported 00:19:59.524 Supported Log Pages Log Page: May Support 00:19:59.524 Commands Supported & 
Effects Log Page: Not Supported 00:19:59.524 Feature Identifiers & Effects Log Page:May Support 00:19:59.524 NVMe-MI Commands & Effects Log Page: May Support 00:19:59.524 Data Area 4 for Telemetry Log: Not Supported 00:19:59.524 Error Log Page Entries Supported: 128 00:19:59.524 Keep Alive: Supported 00:19:59.524 Keep Alive Granularity: 10000 ms 00:19:59.524 00:19:59.524 NVM Command Set Attributes 00:19:59.524 ========================== 00:19:59.524 Submission Queue Entry Size 00:19:59.524 Max: 64 00:19:59.524 Min: 64 00:19:59.524 Completion Queue Entry Size 00:19:59.524 Max: 16 00:19:59.524 Min: 16 00:19:59.524 Number of Namespaces: 32 00:19:59.524 Compare Command: Supported 00:19:59.524 Write Uncorrectable Command: Not Supported 00:19:59.524 Dataset Management Command: Supported 00:19:59.524 Write Zeroes Command: Supported 00:19:59.524 Set Features Save Field: Not Supported 00:19:59.524 Reservations: Not Supported 00:19:59.524 Timestamp: Not Supported 00:19:59.524 Copy: Supported 00:19:59.524 Volatile Write Cache: Present 00:19:59.524 Atomic Write Unit (Normal): 1 00:19:59.524 Atomic Write Unit (PFail): 1 00:19:59.524 Atomic Compare & Write Unit: 1 00:19:59.524 Fused Compare & Write: Supported 00:19:59.524 Scatter-Gather List 00:19:59.524 SGL Command Set: Supported (Dword aligned) 00:19:59.524 SGL Keyed: Not Supported 00:19:59.524 SGL Bit Bucket Descriptor: Not Supported 00:19:59.524 SGL Metadata Pointer: Not Supported 00:19:59.524 Oversized SGL: Not Supported 00:19:59.524 SGL Metadata Address: Not Supported 00:19:59.524 SGL Offset: Not Supported 00:19:59.524 Transport SGL Data Block: Not Supported 00:19:59.524 Replay Protected Memory Block: Not Supported 00:19:59.524 00:19:59.524 Firmware Slot Information 00:19:59.524 ========================= 00:19:59.524 Active slot: 1 00:19:59.524 Slot 1 Firmware Revision: 25.01 00:19:59.524 00:19:59.524 00:19:59.524 Commands Supported and Effects 00:19:59.524 ============================== 00:19:59.524 Admin Commands 00:19:59.524 -------------- 00:19:59.524 Get Log Page (02h): Supported 00:19:59.524 Identify (06h): Supported 00:19:59.524 Abort (08h): Supported 00:19:59.524 Set Features (09h): Supported 00:19:59.524 Get Features (0Ah): Supported 00:19:59.524 Asynchronous Event Request (0Ch): Supported 00:19:59.524 Keep Alive (18h): Supported 00:19:59.524 I/O Commands 00:19:59.524 ------------ 00:19:59.524 Flush (00h): Supported LBA-Change 00:19:59.524 Write (01h): Supported LBA-Change 00:19:59.524 Read (02h): Supported 00:19:59.524 Compare (05h): Supported 00:19:59.524 Write Zeroes (08h): Supported LBA-Change 00:19:59.524 Dataset Management (09h): Supported LBA-Change 00:19:59.524 Copy (19h): Supported LBA-Change 00:19:59.524 00:19:59.524 Error Log 00:19:59.524 ========= 00:19:59.524 00:19:59.524 Arbitration 00:19:59.524 =========== 00:19:59.524 Arbitration Burst: 1 00:19:59.524 00:19:59.524 Power Management 00:19:59.524 ================ 00:19:59.524 Number of Power States: 1 00:19:59.524 Current Power State: Power State #0 00:19:59.524 Power State #0: 00:19:59.524 Max Power: 0.00 W 00:19:59.524 Non-Operational State: Operational 00:19:59.524 Entry Latency: Not Reported 00:19:59.524 Exit Latency: Not Reported 00:19:59.524 Relative Read Throughput: 0 00:19:59.524 Relative Read Latency: 0 00:19:59.524 Relative Write Throughput: 0 00:19:59.524 Relative Write Latency: 0 00:19:59.524 Idle Power: Not Reported 00:19:59.524 Active Power: Not Reported 00:19:59.524 Non-Operational Permissive Mode: Not Supported 00:19:59.524 00:19:59.524 Health Information 
00:19:59.524 ================== 00:19:59.524 Critical Warnings: 00:19:59.524 Available Spare Space: OK 00:19:59.524 Temperature: OK 00:19:59.524 Device Reliability: OK 00:19:59.524 Read Only: No 00:19:59.524 Volatile Memory Backup: OK 00:19:59.524 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:59.524 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:59.524 Available Spare: 0% 00:19:59.524 Available Sp[2024-12-10 00:01:34.428286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:59.524 [2024-12-10 00:01:34.436163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:59.524 [2024-12-10 00:01:34.436193] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:19:59.524 [2024-12-10 00:01:34.436202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.524 [2024-12-10 00:01:34.436208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.524 [2024-12-10 00:01:34.436213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.524 [2024-12-10 00:01:34.436219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.524 [2024-12-10 00:01:34.436267] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:59.524 [2024-12-10 00:01:34.436278] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:59.524 [2024-12-10 00:01:34.437269] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:59.524 [2024-12-10 00:01:34.437315] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:19:59.524 [2024-12-10 00:01:34.437322] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:19:59.524 [2024-12-10 00:01:34.438275] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:59.524 [2024-12-10 00:01:34.438287] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:19:59.524 [2024-12-10 00:01:34.438333] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:59.524 [2024-12-10 00:01:34.439318] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:59.784 are Threshold: 0% 00:19:59.784 Life Percentage Used: 0% 00:19:59.784 Data Units Read: 0 00:19:59.784 Data Units Written: 0 00:19:59.784 Host Read Commands: 0 00:19:59.784 Host Write Commands: 0 00:19:59.784 Controller Busy Time: 0 minutes 00:19:59.784 Power Cycles: 0 00:19:59.784 Power On Hours: 0 hours 00:19:59.784 Unsafe Shutdowns: 0 00:19:59.784 Unrecoverable Media Errors: 0 00:19:59.784 Lifetime Error Log Entries: 0 00:19:59.784 Warning Temperature 
Time: 0 minutes 00:19:59.784 Critical Temperature Time: 0 minutes 00:19:59.784 00:19:59.784 Number of Queues 00:19:59.784 ================ 00:19:59.784 Number of I/O Submission Queues: 127 00:19:59.784 Number of I/O Completion Queues: 127 00:19:59.784 00:19:59.784 Active Namespaces 00:19:59.784 ================= 00:19:59.784 Namespace ID:1 00:19:59.784 Error Recovery Timeout: Unlimited 00:19:59.784 Command Set Identifier: NVM (00h) 00:19:59.784 Deallocate: Supported 00:19:59.784 Deallocated/Unwritten Error: Not Supported 00:19:59.784 Deallocated Read Value: Unknown 00:19:59.784 Deallocate in Write Zeroes: Not Supported 00:19:59.784 Deallocated Guard Field: 0xFFFF 00:19:59.784 Flush: Supported 00:19:59.784 Reservation: Supported 00:19:59.784 Namespace Sharing Capabilities: Multiple Controllers 00:19:59.784 Size (in LBAs): 131072 (0GiB) 00:19:59.784 Capacity (in LBAs): 131072 (0GiB) 00:19:59.784 Utilization (in LBAs): 131072 (0GiB) 00:19:59.784 NGUID: 646FEAA15E294EEA8D97321C3D7A4BA5 00:19:59.784 UUID: 646feaa1-5e29-4eea-8d97-321c3d7a4ba5 00:19:59.784 Thin Provisioning: Not Supported 00:19:59.784 Per-NS Atomic Units: Yes 00:19:59.784 Atomic Boundary Size (Normal): 0 00:19:59.784 Atomic Boundary Size (PFail): 0 00:19:59.784 Atomic Boundary Offset: 0 00:19:59.784 Maximum Single Source Range Length: 65535 00:19:59.784 Maximum Copy Length: 65535 00:19:59.784 Maximum Source Range Count: 1 00:19:59.784 NGUID/EUI64 Never Reused: No 00:19:59.784 Namespace Write Protected: No 00:19:59.784 Number of LBA Formats: 1 00:19:59.784 Current LBA Format: LBA Format #00 00:19:59.784 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:59.784 00:19:59.784 00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:59.784 [2024-12-10 00:01:34.666590] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:05.059 Initializing NVMe Controllers 00:20:05.059 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:05.059 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:05.059 Initialization complete. Launching workers. 
00:20:05.059 ======================================================== 00:20:05.059 Latency(us) 00:20:05.059 Device Information : IOPS MiB/s Average min max 00:20:05.059 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39941.80 156.02 3205.02 1002.27 9567.09 00:20:05.060 ======================================================== 00:20:05.060 Total : 39941.80 156.02 3205.02 1002.27 9567.09 00:20:05.060 00:20:05.060 [2024-12-10 00:01:39.767411] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:05.060 00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:05.319 [2024-12-10 00:01:39.998125] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:10.593 Initializing NVMe Controllers 00:20:10.593 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:10.593 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:10.593 Initialization complete. Launching workers. 00:20:10.593 ======================================================== 00:20:10.593 Latency(us) 00:20:10.593 Device Information : IOPS MiB/s Average min max 00:20:10.593 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39912.87 155.91 3206.59 977.66 9560.18 00:20:10.593 ======================================================== 00:20:10.593 Total : 39912.87 155.91 3206.59 977.66 9560.18 00:20:10.593 00:20:10.593 [2024-12-10 00:01:45.015597] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:10.593 00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:10.593 [2024-12-10 00:01:45.219742] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:15.867 [2024-12-10 00:01:50.355248] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:15.867 Initializing NVMe Controllers 00:20:15.867 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:15.867 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:15.867 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:20:15.867 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:20:15.867 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:20:15.867 Initialization complete. Launching workers. 
00:20:15.867 Starting thread on core 2 00:20:15.867 Starting thread on core 3 00:20:15.867 Starting thread on core 1 00:20:15.867 00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:20:15.867 [2024-12-10 00:01:50.650559] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:19.159 [2024-12-10 00:01:53.725392] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:19.159 Initializing NVMe Controllers 00:20:19.159 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:19.159 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:19.159 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:20:19.159 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:20:19.159 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:20:19.159 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:20:19.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration run with configuration: 00:20:19.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:19.159 Initialization complete. Launching workers. 00:20:19.159 Starting thread on core 1 with urgent priority queue 00:20:19.159 Starting thread on core 2 with urgent priority queue 00:20:19.159 Starting thread on core 3 with urgent priority queue 00:20:19.159 Starting thread on core 0 with urgent priority queue 00:20:19.159 SPDK bdev Controller (SPDK2 ) core 0: 9360.67 IO/s 10.68 secs/100000 ios 00:20:19.159 SPDK bdev Controller (SPDK2 ) core 1: 9899.00 IO/s 10.10 secs/100000 ios 00:20:19.159 SPDK bdev Controller (SPDK2 ) core 2: 7673.33 IO/s 13.03 secs/100000 ios 00:20:19.159 SPDK bdev Controller (SPDK2 ) core 3: 10078.00 IO/s 9.92 secs/100000 ios 00:20:19.159 ======================================================== 00:20:19.159 00:20:19.159 00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:19.159 [2024-12-10 00:01:54.017617] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:19.159 Initializing NVMe Controllers 00:20:19.159 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:19.159 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:19.159 Namespace ID: 1 size: 0GB 00:20:19.159 Initialization complete. 00:20:19.159 INFO: using host memory buffer for IO 00:20:19.159 Hello world! 
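The perf, reconnect, arbitration and hello_world runs recorded above all reach the same vfio-user controller through a single transport ID string. A minimal sketch of the two spdk_nvme_perf invocations, with flags copied verbatim from the trace; the SPDK and TRID variables are introduced here only as shorthand for the workspace path and transport ID used throughout this log:

    # Build tree and transport ID as used in this run
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    # 5 s of 4 KiB reads at queue depth 128 on core mask 0x2 (-s 256 -g kept as in the trace)
    "$SPDK"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
    # Same run with a write workload; the reconnect, arbitration and hello_world examples
    # above pass the same -r "$TRID" argument
    "$SPDK"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2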
00:20:19.159 [2024-12-10 00:01:54.027685] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:19.159 00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:19.418 [2024-12-10 00:01:54.311443] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:20.796 Initializing NVMe Controllers 00:20:20.796 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:20.796 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:20.796 Initialization complete. Launching workers. 00:20:20.796 submit (in ns) avg, min, max = 6407.9, 3225.2, 3998821.7 00:20:20.796 complete (in ns) avg, min, max = 21161.2, 1768.7, 4992907.8 00:20:20.796 00:20:20.796 Submit histogram 00:20:20.796 ================ 00:20:20.796 Range in us Cumulative Count 00:20:20.796 3.214 - 3.228: 0.0061% ( 1) 00:20:20.796 3.228 - 3.242: 0.0184% ( 2) 00:20:20.796 3.242 - 3.256: 0.0307% ( 2) 00:20:20.796 3.256 - 3.270: 0.0613% ( 5) 00:20:20.796 3.270 - 3.283: 0.1104% ( 8) 00:20:20.796 3.283 - 3.297: 0.3863% ( 45) 00:20:20.796 3.297 - 3.311: 2.3915% ( 327) 00:20:20.796 3.311 - 3.325: 6.4263% ( 658) 00:20:20.796 3.325 - 3.339: 11.7672% ( 871) 00:20:20.796 3.339 - 3.353: 17.6968% ( 967) 00:20:20.796 3.353 - 3.367: 23.4547% ( 939) 00:20:20.796 3.367 - 3.381: 28.9490% ( 896) 00:20:20.796 3.381 - 3.395: 34.0446% ( 831) 00:20:20.796 3.395 - 3.409: 39.7535% ( 931) 00:20:20.796 3.409 - 3.423: 44.8492% ( 831) 00:20:20.796 3.423 - 3.437: 48.7491% ( 636) 00:20:20.796 3.437 - 3.450: 52.7226% ( 648) 00:20:20.796 3.450 - 3.464: 58.9956% ( 1023) 00:20:20.796 3.464 - 3.478: 65.3851% ( 1042) 00:20:20.796 3.478 - 3.492: 69.5855% ( 685) 00:20:20.796 3.492 - 3.506: 74.5769% ( 814) 00:20:20.796 3.506 - 3.520: 78.9858% ( 719) 00:20:20.796 3.520 - 3.534: 82.5362% ( 579) 00:20:20.796 3.534 - 3.548: 84.9092% ( 387) 00:20:20.796 3.548 - 3.562: 86.0498% ( 186) 00:20:20.796 3.562 - 3.590: 87.4540% ( 229) 00:20:20.796 3.590 - 3.617: 88.7724% ( 215) 00:20:20.796 3.617 - 3.645: 90.3667% ( 260) 00:20:20.796 3.645 - 3.673: 91.9365% ( 256) 00:20:20.796 3.673 - 3.701: 93.4265% ( 243) 00:20:20.796 3.701 - 3.729: 95.2845% ( 303) 00:20:20.796 3.729 - 3.757: 96.8482% ( 255) 00:20:20.796 3.757 - 3.784: 97.8599% ( 165) 00:20:20.796 3.784 - 3.812: 98.6387% ( 127) 00:20:20.796 3.812 - 3.840: 99.1477% ( 83) 00:20:20.796 3.840 - 3.868: 99.3991% ( 41) 00:20:20.797 3.868 - 3.896: 99.5340% ( 22) 00:20:20.797 3.896 - 3.923: 99.5708% ( 6) 00:20:20.797 3.923 - 3.951: 99.6014% ( 5) 00:20:20.797 3.979 - 4.007: 99.6076% ( 1) 00:20:20.797 4.146 - 4.174: 99.6198% ( 2) 00:20:20.797 5.370 - 5.398: 99.6260% ( 1) 00:20:20.797 5.482 - 5.510: 99.6321% ( 1) 00:20:20.797 5.537 - 5.565: 99.6443% ( 2) 00:20:20.797 5.704 - 5.732: 99.6505% ( 1) 00:20:20.797 5.871 - 5.899: 99.6566% ( 1) 00:20:20.797 6.122 - 6.150: 99.6627% ( 1) 00:20:20.797 6.289 - 6.317: 99.6689% ( 1) 00:20:20.797 6.511 - 6.539: 99.6750% ( 1) 00:20:20.797 6.678 - 6.706: 99.6811% ( 1) 00:20:20.797 6.706 - 6.734: 99.6873% ( 1) 00:20:20.797 6.762 - 6.790: 99.6934% ( 1) 00:20:20.797 6.790 - 6.817: 99.7118% ( 3) 00:20:20.797 6.817 - 6.845: 99.7179% ( 1) 00:20:20.797 6.873 - 6.901: 99.7302% ( 2) 00:20:20.797 6.929 - 6.957: 99.7425% ( 2) 00:20:20.797 6.957 - 6.984: 99.7609% ( 3) 00:20:20.797 6.984 - 
7.012: 99.7670% ( 1) 00:20:20.797 7.012 - 7.040: 99.7731% ( 1) 00:20:20.797 7.040 - 7.068: 99.7792% ( 1) 00:20:20.797 7.068 - 7.096: 99.7915% ( 2) 00:20:20.797 7.096 - 7.123: 99.8038% ( 2) 00:20:20.797 7.235 - 7.290: 99.8222% ( 3) 00:20:20.797 7.346 - 7.402: 99.8467% ( 4) 00:20:20.797 7.402 - 7.457: 99.8528% ( 1) 00:20:20.797 7.513 - 7.569: 99.8651% ( 2) 00:20:20.797 7.569 - 7.624: 99.8712% ( 1) 00:20:20.797 7.680 - 7.736: 99.8774% ( 1) 00:20:20.797 7.736 - 7.791: 99.8835% ( 1) 00:20:20.797 7.791 - 7.847: 99.8896% ( 1) 00:20:20.797 8.014 - 8.070: 99.8958% ( 1) 00:20:20.797 8.125 - 8.181: 99.9019% ( 1) 00:20:20.797 8.348 - 8.403: 99.9142% ( 2) 00:20:20.797 9.350 - 9.405: 99.9203% ( 1) 00:20:20.797 11.576 - 11.631: 99.9264% ( 1) 00:20:20.797 3989.148 - 4017.642: 100.0000% ( 12) 00:20:20.797 00:20:20.797 Complete histogram 00:20:20.797 ================== 00:20:20.797 Range in us Cumulative Count 00:20:20.797 1.767 - 1.774: 0.0245% ( 4) 00:20:20.797 1.774 - [2024-12-10 00:01:55.406229] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:20.797 1.781: 0.1226% ( 16) 00:20:20.797 1.781 - 1.795: 0.3189% ( 32) 00:20:20.797 1.795 - 1.809: 0.3802% ( 10) 00:20:20.797 1.809 - 1.823: 4.1391% ( 613) 00:20:20.797 1.823 - 1.837: 45.2784% ( 6709) 00:20:20.797 1.837 - 1.850: 79.4641% ( 5575) 00:20:20.797 1.850 - 1.864: 86.2154% ( 1101) 00:20:20.797 1.864 - 1.878: 88.6375% ( 395) 00:20:20.797 1.878 - 1.892: 90.5690% ( 315) 00:20:20.797 1.892 - 1.906: 95.3581% ( 781) 00:20:20.797 1.906 - 1.920: 98.0684% ( 442) 00:20:20.797 1.920 - 1.934: 98.8472% ( 127) 00:20:20.797 1.934 - 1.948: 99.0066% ( 26) 00:20:20.797 1.948 - 1.962: 99.0802% ( 12) 00:20:20.797 1.962 - 1.976: 99.1538% ( 12) 00:20:20.797 1.976 - 1.990: 99.2151% ( 10) 00:20:20.797 1.990 - 2.003: 99.2274% ( 2) 00:20:20.797 2.003 - 2.017: 99.2703% ( 7) 00:20:20.797 2.017 - 2.031: 99.2764% ( 1) 00:20:20.797 2.045 - 2.059: 99.2826% ( 1) 00:20:20.797 2.101 - 2.115: 99.2887% ( 1) 00:20:20.797 2.115 - 2.129: 99.2948% ( 1) 00:20:20.797 2.254 - 2.268: 99.3010% ( 1) 00:20:20.797 2.268 - 2.282: 99.3071% ( 1) 00:20:20.797 2.296 - 2.310: 99.3132% ( 1) 00:20:20.797 2.323 - 2.337: 99.3194% ( 1) 00:20:20.797 3.840 - 3.868: 99.3255% ( 1) 00:20:20.797 3.868 - 3.896: 99.3316% ( 1) 00:20:20.797 3.896 - 3.923: 99.3377% ( 1) 00:20:20.797 4.007 - 4.035: 99.3500% ( 2) 00:20:20.797 4.118 - 4.146: 99.3623% ( 2) 00:20:20.797 4.563 - 4.591: 99.3684% ( 1) 00:20:20.797 4.870 - 4.897: 99.3807% ( 2) 00:20:20.797 4.925 - 4.953: 99.3868% ( 1) 00:20:20.797 5.037 - 5.064: 99.3929% ( 1) 00:20:20.797 5.231 - 5.259: 99.3991% ( 1) 00:20:20.797 5.426 - 5.454: 99.4052% ( 1) 00:20:20.797 5.537 - 5.565: 99.4113% ( 1) 00:20:20.797 5.649 - 5.677: 99.4236% ( 2) 00:20:20.797 5.732 - 5.760: 99.4359% ( 2) 00:20:20.797 5.843 - 5.871: 99.4420% ( 1) 00:20:20.797 5.927 - 5.955: 99.4481% ( 1) 00:20:20.797 6.066 - 6.094: 99.4543% ( 1) 00:20:20.797 6.205 - 6.233: 99.4604% ( 1) 00:20:20.797 6.261 - 6.289: 99.4665% ( 1) 00:20:20.797 6.317 - 6.344: 99.4727% ( 1) 00:20:20.797 6.344 - 6.372: 99.4788% ( 1) 00:20:20.797 6.400 - 6.428: 99.4849% ( 1) 00:20:20.797 6.595 - 6.623: 99.4910% ( 1) 00:20:20.797 6.790 - 6.817: 99.4972% ( 1) 00:20:20.797 6.873 - 6.901: 99.5033% ( 1) 00:20:20.797 7.123 - 7.179: 99.5094% ( 1) 00:20:20.797 14.247 - 14.358: 99.5156% ( 1) 00:20:20.797 3120.083 - 3134.330: 99.5217% ( 1) 00:20:20.797 3205.565 - 3219.812: 99.5278% ( 1) 00:20:20.797 3989.148 - 4017.642: 99.9939% ( 76) 00:20:20.797 4986.435 - 5014.929: 100.0000% ( 1) 00:20:20.797 
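The submit/complete figures above come from the overhead tool: the summary line gives average, minimum and maximum per-IO software overhead in nanoseconds (roughly 6.4 us average on the submit side in this run), and each histogram row is a latency bucket with its cumulative percentage and hit count. A sketch of the invocation, with flags copied verbatim from the trace above and the SPDK/TRID shorthand reused from the earlier sketch:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    # 1-second 4 KiB run against the vfio-user controller (-H -g -d 256 as recorded above)
    "$SPDK"/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r "$TRID"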
00:20:20.797 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:20.797 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:20.797 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:20.797 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:20.797 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:20.797 [ 00:20:20.797 { 00:20:20.797 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:20.797 "subtype": "Discovery", 00:20:20.797 "listen_addresses": [], 00:20:20.797 "allow_any_host": true, 00:20:20.797 "hosts": [] 00:20:20.797 }, 00:20:20.797 { 00:20:20.797 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:20.797 "subtype": "NVMe", 00:20:20.797 "listen_addresses": [ 00:20:20.797 { 00:20:20.797 "trtype": "VFIOUSER", 00:20:20.797 "adrfam": "IPv4", 00:20:20.797 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:20.797 "trsvcid": "0" 00:20:20.797 } 00:20:20.797 ], 00:20:20.797 "allow_any_host": true, 00:20:20.797 "hosts": [], 00:20:20.797 "serial_number": "SPDK1", 00:20:20.797 "model_number": "SPDK bdev Controller", 00:20:20.797 "max_namespaces": 32, 00:20:20.797 "min_cntlid": 1, 00:20:20.797 "max_cntlid": 65519, 00:20:20.797 "namespaces": [ 00:20:20.797 { 00:20:20.797 "nsid": 1, 00:20:20.797 "bdev_name": "Malloc1", 00:20:20.797 "name": "Malloc1", 00:20:20.797 "nguid": "247DF23EDD18410CADC941829707AB5C", 00:20:20.797 "uuid": "247df23e-dd18-410c-adc9-41829707ab5c" 00:20:20.797 }, 00:20:20.797 { 00:20:20.797 "nsid": 2, 00:20:20.797 "bdev_name": "Malloc3", 00:20:20.797 "name": "Malloc3", 00:20:20.797 "nguid": "CD376F27C81045169A736D353F0DF7A2", 00:20:20.797 "uuid": "cd376f27-c810-4516-9a73-6d353f0df7a2" 00:20:20.797 } 00:20:20.797 ] 00:20:20.797 }, 00:20:20.797 { 00:20:20.797 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:20.797 "subtype": "NVMe", 00:20:20.797 "listen_addresses": [ 00:20:20.797 { 00:20:20.797 "trtype": "VFIOUSER", 00:20:20.797 "adrfam": "IPv4", 00:20:20.797 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:20.797 "trsvcid": "0" 00:20:20.797 } 00:20:20.797 ], 00:20:20.797 "allow_any_host": true, 00:20:20.797 "hosts": [], 00:20:20.797 "serial_number": "SPDK2", 00:20:20.797 "model_number": "SPDK bdev Controller", 00:20:20.797 "max_namespaces": 32, 00:20:20.797 "min_cntlid": 1, 00:20:20.797 "max_cntlid": 65519, 00:20:20.797 "namespaces": [ 00:20:20.797 { 00:20:20.797 "nsid": 1, 00:20:20.797 "bdev_name": "Malloc2", 00:20:20.797 "name": "Malloc2", 00:20:20.797 "nguid": "646FEAA15E294EEA8D97321C3D7A4BA5", 00:20:20.797 "uuid": "646feaa1-5e29-4eea-8d97-321c3d7a4ba5" 00:20:20.797 } 00:20:20.797 ] 00:20:20.797 } 00:20:20.797 ] 00:20:20.797 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:20.797 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=339218 00:20:20.797 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:20.797 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:20.797 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:20:20.797 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:20.797 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:20:20.798 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:20:20.798 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:20:21.087 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:21.087 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:20:21.087 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:20:21.087 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:20:21.087 [2024-12-10 00:01:55.820577] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:21.087 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:21.087 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:21.087 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:20:21.087 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:21.087 00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:21.346 Malloc4 00:20:21.346 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:21.346 [2024-12-10 00:01:56.262034] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:21.605 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:21.605 Asynchronous Event Request test 00:20:21.605 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:21.605 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:21.605 Registering asynchronous event callbacks... 00:20:21.605 Starting namespace attribute notice tests for all controllers... 00:20:21.605 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:21.605 aer_cb - Changed Namespace 00:20:21.605 Cleaning up... 
00:20:21.605 [ 00:20:21.605 { 00:20:21.605 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:21.605 "subtype": "Discovery", 00:20:21.605 "listen_addresses": [], 00:20:21.605 "allow_any_host": true, 00:20:21.605 "hosts": [] 00:20:21.605 }, 00:20:21.605 { 00:20:21.605 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:21.605 "subtype": "NVMe", 00:20:21.605 "listen_addresses": [ 00:20:21.605 { 00:20:21.605 "trtype": "VFIOUSER", 00:20:21.605 "adrfam": "IPv4", 00:20:21.605 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:21.605 "trsvcid": "0" 00:20:21.605 } 00:20:21.605 ], 00:20:21.605 "allow_any_host": true, 00:20:21.605 "hosts": [], 00:20:21.605 "serial_number": "SPDK1", 00:20:21.605 "model_number": "SPDK bdev Controller", 00:20:21.605 "max_namespaces": 32, 00:20:21.605 "min_cntlid": 1, 00:20:21.605 "max_cntlid": 65519, 00:20:21.605 "namespaces": [ 00:20:21.605 { 00:20:21.605 "nsid": 1, 00:20:21.605 "bdev_name": "Malloc1", 00:20:21.605 "name": "Malloc1", 00:20:21.605 "nguid": "247DF23EDD18410CADC941829707AB5C", 00:20:21.605 "uuid": "247df23e-dd18-410c-adc9-41829707ab5c" 00:20:21.605 }, 00:20:21.605 { 00:20:21.605 "nsid": 2, 00:20:21.605 "bdev_name": "Malloc3", 00:20:21.605 "name": "Malloc3", 00:20:21.605 "nguid": "CD376F27C81045169A736D353F0DF7A2", 00:20:21.605 "uuid": "cd376f27-c810-4516-9a73-6d353f0df7a2" 00:20:21.605 } 00:20:21.605 ] 00:20:21.605 }, 00:20:21.605 { 00:20:21.605 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:21.605 "subtype": "NVMe", 00:20:21.605 "listen_addresses": [ 00:20:21.605 { 00:20:21.605 "trtype": "VFIOUSER", 00:20:21.605 "adrfam": "IPv4", 00:20:21.605 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:21.605 "trsvcid": "0" 00:20:21.605 } 00:20:21.605 ], 00:20:21.605 "allow_any_host": true, 00:20:21.605 "hosts": [], 00:20:21.605 "serial_number": "SPDK2", 00:20:21.605 "model_number": "SPDK bdev Controller", 00:20:21.605 "max_namespaces": 32, 00:20:21.605 "min_cntlid": 1, 00:20:21.605 "max_cntlid": 65519, 00:20:21.605 "namespaces": [ 00:20:21.605 { 00:20:21.605 "nsid": 1, 00:20:21.605 "bdev_name": "Malloc2", 00:20:21.605 "name": "Malloc2", 00:20:21.605 "nguid": "646FEAA15E294EEA8D97321C3D7A4BA5", 00:20:21.605 "uuid": "646feaa1-5e29-4eea-8d97-321c3d7a4ba5" 00:20:21.605 }, 00:20:21.605 { 00:20:21.605 "nsid": 2, 00:20:21.605 "bdev_name": "Malloc4", 00:20:21.605 "name": "Malloc4", 00:20:21.605 "nguid": "8E8C8683F9A3496FBBD989D59B5AF094", 00:20:21.605 "uuid": "8e8c8683-f9a3-496f-bbd9-89d59b5af094" 00:20:21.605 } 00:20:21.605 ] 00:20:21.605 } 00:20:21.605 ] 00:20:21.605 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 339218 00:20:21.605 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:21.605 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 331345 00:20:21.605 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 331345 ']' 00:20:21.605 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 331345 00:20:21.605 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:20:21.605 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.605 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 331345 00:20:21.605 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:21.605 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:21.605 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 331345' 00:20:21.605 killing process with pid 331345 00:20:21.605 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 331345 00:20:21.605 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 331345 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=339296 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 339296' 00:20:21.865 Process pid: 339296 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 339296 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 339296 ']' 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.865 00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:22.125 [2024-12-10 00:01:56.826931] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:22.125 [2024-12-10 00:01:56.827780] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:20:22.125 [2024-12-10 00:01:56.827814] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.125 [2024-12-10 00:01:56.900439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.125 [2024-12-10 00:01:56.939571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.125 [2024-12-10 00:01:56.939609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.125 [2024-12-10 00:01:56.939616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.125 [2024-12-10 00:01:56.939622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.125 [2024-12-10 00:01:56.939627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.125 [2024-12-10 00:01:56.941045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.125 [2024-12-10 00:01:56.941135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.125 [2024-12-10 00:01:56.941264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.125 [2024-12-10 00:01:56.941265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.125 [2024-12-10 00:01:57.010098] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:22.125 [2024-12-10 00:01:57.010339] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:22.126 [2024-12-10 00:01:57.010940] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:20:22.126 [2024-12-10 00:01:57.011025] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:20:22.126 [2024-12-10 00:01:57.011105] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
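The notices above record the target coming up in interrupt mode: nvmf_tgt is started on cores 0-3 with --interrupt-mode, the four reactors start, and each spdk_thread is switched to intr mode before the VFIOUSER transport and the two vfio-user subsystems are provisioned in the trace that follows. A condensed sketch of that sequence for one subsystem (cnode2/Malloc2 repeats the same steps), with commands copied from the trace and the SPDK variable used only as shorthand:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
    # Start the target on cores 0-3 in interrupt mode (the test waits for the RPC socket
    # via waitforlisten before issuing any RPCs)
    "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    # Create the VFIOUSER transport; -M -I are the transport args passed by the test script
    "$SPDK"/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    # Provision one vfio-user subsystem backed by a 64 MB malloc bdev (512-byte blocks)
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    "$SPDK"/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    "$SPDK"/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    "$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    "$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0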
00:20:22.126 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.126 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:20:22.126 00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:23.505 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:23.505 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:23.505 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:23.505 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:23.505 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:23.505 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:23.765 Malloc1 00:20:23.765 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:23.765 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:24.024 00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:24.283 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:24.283 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:24.283 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:24.542 Malloc2 00:20:24.542 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:24.800 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:24.800 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:25.060 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:25.060 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 339296 00:20:25.060 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user 
-- common/autotest_common.sh@954 -- # '[' -z 339296 ']' 00:20:25.060 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 339296 00:20:25.060 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:20:25.060 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.060 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 339296 00:20:25.060 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:25.060 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:25.060 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 339296' 00:20:25.060 killing process with pid 339296 00:20:25.060 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 339296 00:20:25.060 00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 339296 00:20:25.319 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:25.319 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:25.319 00:20:25.319 real 0m51.328s 00:20:25.319 user 3m18.565s 00:20:25.319 sys 0m3.268s 00:20:25.319 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.319 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:25.319 ************************************ 00:20:25.319 END TEST nvmf_vfio_user 00:20:25.319 ************************************ 00:20:25.319 00:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:25.319 00:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:25.319 00:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.319 00:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:25.319 ************************************ 00:20:25.319 START TEST nvmf_vfio_user_nvme_compliance 00:20:25.319 ************************************ 00:20:25.319 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:25.580 * Looking for test storage... 
00:20:25.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:25.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.580 --rc genhtml_branch_coverage=1 00:20:25.580 --rc genhtml_function_coverage=1 00:20:25.580 --rc genhtml_legend=1 00:20:25.580 --rc geninfo_all_blocks=1 00:20:25.580 --rc geninfo_unexecuted_blocks=1 00:20:25.580 00:20:25.580 ' 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:25.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.580 --rc genhtml_branch_coverage=1 00:20:25.580 --rc genhtml_function_coverage=1 00:20:25.580 --rc genhtml_legend=1 00:20:25.580 --rc geninfo_all_blocks=1 00:20:25.580 --rc geninfo_unexecuted_blocks=1 00:20:25.580 00:20:25.580 ' 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:25.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.580 --rc genhtml_branch_coverage=1 00:20:25.580 --rc genhtml_function_coverage=1 00:20:25.580 --rc genhtml_legend=1 00:20:25.580 --rc geninfo_all_blocks=1 00:20:25.580 --rc geninfo_unexecuted_blocks=1 00:20:25.580 00:20:25.580 ' 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:25.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.580 --rc genhtml_branch_coverage=1 00:20:25.580 --rc genhtml_function_coverage=1 00:20:25.580 --rc genhtml_legend=1 00:20:25.580 --rc geninfo_all_blocks=1 00:20:25.580 --rc 
geninfo_unexecuted_blocks=1 00:20:25.580 00:20:25.580 ' 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.580 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:25.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=340024 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 340024' 00:20:25.581 Process pid: 340024 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 340024 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 340024 ']' 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.581 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:25.581 [2024-12-10 00:02:00.507708] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:20:25.581 [2024-12-10 00:02:00.507755] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.841 [2024-12-10 00:02:00.581356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:25.841 [2024-12-10 00:02:00.621062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.841 [2024-12-10 00:02:00.621098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.841 [2024-12-10 00:02:00.621106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.841 [2024-12-10 00:02:00.621112] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.841 [2024-12-10 00:02:00.621117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.841 [2024-12-10 00:02:00.622514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.841 [2024-12-10 00:02:00.622620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.841 [2024-12-10 00:02:00.622621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.841 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.841 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:20:25.841 00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:27.226 malloc0 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:20:27.226 00:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.226 00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:20:27.226 00:20:27.226 00:20:27.226 CUnit - A unit testing framework for C - Version 2.1-3 00:20:27.226 http://cunit.sourceforge.net/ 00:20:27.226 00:20:27.226 00:20:27.226 Suite: nvme_compliance 00:20:27.226 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-10 00:02:01.973590] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.226 [2024-12-10 00:02:01.974944] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:20:27.226 [2024-12-10 00:02:01.974959] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:20:27.226 [2024-12-10 00:02:01.974965] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:20:27.226 [2024-12-10 00:02:01.976613] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.226 passed 00:20:27.226 Test: admin_identify_ctrlr_verify_fused ...[2024-12-10 00:02:02.056174] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.226 [2024-12-10 00:02:02.059197] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.226 passed 00:20:27.226 Test: admin_identify_ns ...[2024-12-10 00:02:02.138585] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.485 [2024-12-10 00:02:02.202169] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:20:27.486 [2024-12-10 00:02:02.210177] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:20:27.486 [2024-12-10 00:02:02.231265] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:20:27.486 passed 00:20:27.486 Test: admin_get_features_mandatory_features ...[2024-12-10 00:02:02.306466] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.486 [2024-12-10 00:02:02.309485] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.486 passed 00:20:27.486 Test: admin_get_features_optional_features ...[2024-12-10 00:02:02.385990] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.486 [2024-12-10 00:02:02.389014] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.486 passed 00:20:27.745 Test: admin_set_features_number_of_queues ...[2024-12-10 00:02:02.467941] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.745 [2024-12-10 00:02:02.573267] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.745 passed 00:20:27.745 Test: admin_get_log_page_mandatory_logs ...[2024-12-10 00:02:02.647478] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:27.745 [2024-12-10 00:02:02.650495] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:27.745 passed 00:20:28.004 Test: admin_get_log_page_with_lpo ...[2024-12-10 00:02:02.728414] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:28.004 [2024-12-10 00:02:02.797190] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:20:28.004 [2024-12-10 00:02:02.810238] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:28.004 passed 00:20:28.004 Test: fabric_property_get ...[2024-12-10 00:02:02.887312] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:28.004 [2024-12-10 00:02:02.888556] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:20:28.004 [2024-12-10 00:02:02.890329] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:28.004 passed 00:20:28.264 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-10 00:02:02.967820] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:28.264 [2024-12-10 00:02:02.969054] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:20:28.264 [2024-12-10 00:02:02.972857] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:28.264 passed 00:20:28.264 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-10 00:02:03.049622] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:28.264 [2024-12-10 00:02:03.133175] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:28.264 [2024-12-10 00:02:03.149161] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:28.264 [2024-12-10 00:02:03.154246] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:28.264 passed 00:20:28.523 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-10 00:02:03.232135] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:28.523 [2024-12-10 00:02:03.234385] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:20:28.523 [2024-12-10 00:02:03.236166] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:20:28.523 passed 00:20:28.523 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-10 00:02:03.313098] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:28.523 [2024-12-10 00:02:03.390165] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:28.523 [2024-12-10 00:02:03.414165] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:28.523 [2024-12-10 00:02:03.419251] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:28.523 passed 00:20:28.782 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-10 00:02:03.496304] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:28.782 [2024-12-10 00:02:03.497536] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:20:28.782 [2024-12-10 00:02:03.497561] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:20:28.782 [2024-12-10 00:02:03.499329] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:28.782 passed 00:20:28.782 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-10 00:02:03.575568] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:28.782 [2024-12-10 00:02:03.671176] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:20:28.782 [2024-12-10 00:02:03.679167] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:20:28.782 [2024-12-10 00:02:03.687169] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:20:28.782 [2024-12-10 00:02:03.695165] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:20:29.042 [2024-12-10 00:02:03.724279] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:29.042 passed 00:20:29.042 Test: admin_create_io_sq_verify_pc ...[2024-12-10 00:02:03.798464] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:29.042 [2024-12-10 00:02:03.817173] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:20:29.042 [2024-12-10 00:02:03.834654] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:29.042 passed 00:20:29.042 Test: admin_create_io_qp_max_qps ...[2024-12-10 00:02:03.911192] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:30.419 [2024-12-10 00:02:05.013168] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:20:30.678 [2024-12-10 00:02:05.396059] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:30.678 passed 00:20:30.678 Test: admin_create_io_sq_shared_cq ...[2024-12-10 00:02:05.473598] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:30.678 [2024-12-10 00:02:05.609167] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:30.938 [2024-12-10 00:02:05.646244] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:30.938 passed 00:20:30.938 00:20:30.938 Run Summary: Type Total Ran Passed Failed Inactive 00:20:30.938 suites 1 1 n/a 0 0 00:20:30.938 tests 18 18 18 0 0 00:20:30.938 asserts 
360 360 360 0 n/a 00:20:30.938 00:20:30.938 Elapsed time = 1.509 seconds 00:20:30.938 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 340024 00:20:30.938 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 340024 ']' 00:20:30.938 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 340024 00:20:30.938 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:20:30.938 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.938 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 340024 00:20:30.938 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:30.938 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:30.938 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 340024' 00:20:30.938 killing process with pid 340024 00:20:30.938 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 340024 00:20:30.938 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 340024 00:20:31.196 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:31.196 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:31.196 00:20:31.196 real 0m5.674s 00:20:31.196 user 0m15.847s 00:20:31.196 sys 0m0.517s 00:20:31.196 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:31.196 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:31.196 ************************************ 00:20:31.196 END TEST nvmf_vfio_user_nvme_compliance 00:20:31.196 ************************************ 00:20:31.196 00:02:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:31.196 00:02:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:31.196 00:02:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:31.196 00:02:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:31.196 ************************************ 00:20:31.196 START TEST nvmf_vfio_user_fuzz 00:20:31.196 ************************************ 00:20:31.196 00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:31.196 * Looking for test storage... 
00:20:31.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:20:31.196 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:31.196 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:20:31.196 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:31.456 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:31.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.457 --rc genhtml_branch_coverage=1 00:20:31.457 --rc genhtml_function_coverage=1 00:20:31.457 --rc genhtml_legend=1 00:20:31.457 --rc geninfo_all_blocks=1 00:20:31.457 --rc geninfo_unexecuted_blocks=1 00:20:31.457 00:20:31.457 ' 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:31.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.457 --rc genhtml_branch_coverage=1 00:20:31.457 --rc genhtml_function_coverage=1 00:20:31.457 --rc genhtml_legend=1 00:20:31.457 --rc geninfo_all_blocks=1 00:20:31.457 --rc geninfo_unexecuted_blocks=1 00:20:31.457 00:20:31.457 ' 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:31.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.457 --rc genhtml_branch_coverage=1 00:20:31.457 --rc genhtml_function_coverage=1 00:20:31.457 --rc genhtml_legend=1 00:20:31.457 --rc geninfo_all_blocks=1 00:20:31.457 --rc geninfo_unexecuted_blocks=1 00:20:31.457 00:20:31.457 ' 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:31.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.457 --rc genhtml_branch_coverage=1 00:20:31.457 --rc genhtml_function_coverage=1 00:20:31.457 --rc genhtml_legend=1 00:20:31.457 --rc geninfo_all_blocks=1 00:20:31.457 --rc geninfo_unexecuted_blocks=1 00:20:31.457 00:20:31.457 ' 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:20:31.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=341008 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 341008' 00:20:31.457 Process pid: 341008 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 341008 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 341008 ']' 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
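At this point the fuzz test has launched build/bin/nvmf_tgt with core mask 0x1 and is sitting in waitforlisten until the target answers on /var/tmp/spdk.sock. A rough sketch of such a wait loop, assuming scripts/rpc.py from the SPDK tree is used as the probe (illustration only, not the actual waitforlisten helper):

    # poll the RPC socket until nvmf_tgt responds or the target dies
    tgt_pid=$!                        # pid of the nvmf_tgt started just above
    for ((i = 0; i < 100; i++)); do
        if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break                     # RPC server is up
        fi
        kill -0 "$tgt_pid" || exit 1  # bail out if the target already exited
        sleep 0.1
    done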
00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:31.457 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:31.717 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.717 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:20:31.717 00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:32.654 malloc0 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
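The rpc_cmd calls above assemble the VFIO-user target the fuzzer will exercise: a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2021-09.io.spdk:cnode0 with serial "spdk", the namespace, and a listener rooted at /var/run/vfio-user. The same sequence, condensed into the standalone rpc.py client as a sketch (assumes the target listens on the default /var/tmp/spdk.sock):

    # build the VFIO-user subsystem the fuzzer connects to
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0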
00:20:32.654 00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:21:04.741 Fuzzing completed. Shutting down the fuzz application 00:21:04.741 00:21:04.741 Dumping successful admin opcodes: 00:21:04.741 9, 10, 00:21:04.741 Dumping successful io opcodes: 00:21:04.741 0, 00:21:04.741 NS: 0x20000081ef00 I/O qp, Total commands completed: 1058016, total successful commands: 4181, random_seed: 2731825728 00:21:04.741 NS: 0x20000081ef00 admin qp, Total commands completed: 236384, total successful commands: 55, random_seed: 203452096 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 341008 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 341008 ']' 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 341008 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 341008 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 341008' 00:21:04.741 killing process with pid 341008 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 341008 00:21:04.741 00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 341008 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:21:04.741 00:21:04.741 real 0m32.205s 00:21:04.741 user 0m34.804s 00:21:04.741 sys 0m26.218s 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:04.741 ************************************ 
00:21:04.741 END TEST nvmf_vfio_user_fuzz 00:21:04.741 ************************************ 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:04.741 ************************************ 00:21:04.741 START TEST nvmf_auth_target 00:21:04.741 ************************************ 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:04.741 * Looking for test storage... 00:21:04.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.741 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:04.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.742 --rc genhtml_branch_coverage=1 00:21:04.742 --rc genhtml_function_coverage=1 00:21:04.742 --rc genhtml_legend=1 00:21:04.742 --rc geninfo_all_blocks=1 00:21:04.742 --rc geninfo_unexecuted_blocks=1 00:21:04.742 00:21:04.742 ' 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:04.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.742 --rc genhtml_branch_coverage=1 00:21:04.742 --rc genhtml_function_coverage=1 00:21:04.742 --rc genhtml_legend=1 00:21:04.742 --rc geninfo_all_blocks=1 00:21:04.742 --rc geninfo_unexecuted_blocks=1 00:21:04.742 00:21:04.742 ' 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:04.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.742 --rc genhtml_branch_coverage=1 00:21:04.742 --rc genhtml_function_coverage=1 00:21:04.742 --rc genhtml_legend=1 00:21:04.742 --rc geninfo_all_blocks=1 00:21:04.742 --rc geninfo_unexecuted_blocks=1 00:21:04.742 00:21:04.742 ' 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:04.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.742 --rc genhtml_branch_coverage=1 00:21:04.742 --rc genhtml_function_coverage=1 00:21:04.742 --rc genhtml_legend=1 00:21:04.742 --rc geninfo_all_blocks=1 00:21:04.742 --rc geninfo_unexecuted_blocks=1 00:21:04.742 00:21:04.742 ' 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:21:04.742 00:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.742 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.743 00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:21:10.023 
00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.023 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:10.024 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.024 00:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:10.024 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:10.024 Found net devices under 0000:86:00.0: cvl_0_0 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:10.024 Found net devices under 0000:86:00.1: cvl_0_1 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:10.024 00:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:10.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:21:10.024 00:21:10.024 --- 10.0.0.2 ping statistics --- 00:21:10.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.024 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:21:10.024 00:21:10.024 --- 10.0.0.1 ping statistics --- 00:21:10.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.024 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=349315 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 349315 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:10.024 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 349315 ']' 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
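Once the data path checks out, the trace loads the kernel NVMe/TCP module and starts the SPDK target inside the namespace with the nvmf_auth debug flag enabled. Stripped of the Jenkins workspace prefix (shortened here to ./build/bin), the launch recorded above is roughly:

  modprobe nvme-tcp
  # nvmf_tgt is wrapped in the namespace command stored in NVMF_TARGET_NS_CMD
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!        # 349315 in this run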
00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=349437 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a1c72cc9f46645db67067080bfc112f1c2a9d33e642415d0 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.uFu 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a1c72cc9f46645db67067080bfc112f1c2a9d33e642415d0 0 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a1c72cc9f46645db67067080bfc112f1c2a9d33e642415d0 0 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a1c72cc9f46645db67067080bfc112f1c2a9d33e642415d0 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
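gen_dhchap_key above builds a DH-HMAC-CHAP secret from 24 random bytes (len=48 hex characters for key0) and wraps it in the DHHC-1 container. The python body is not printed by the trace, so the encoding below is only a sketch based on the DH-HMAC-CHAP secret format (base64 of the key with a 4-byte CRC-32 trailer; little-endian is an assumption), not the literal helper:

  hexkey=$(xxd -p -c0 -l 24 /dev/urandom)     # 48-char hex secret, as in the trace
  python3 - "$hexkey" 0 <<'PY'                # second arg: 0=null, 1=sha256, 2=sha384, 3=sha512
  import sys, base64, zlib
  key = bytes.fromhex(sys.argv[1])
  crc = zlib.crc32(key).to_bytes(4, "little")  # assumption: CRC-32 appended little-endian
  print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
  PY

The result is written to a mktemp file such as /tmp/spdk.key-null.uFu and restricted to mode 0600 in the entries that follow.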
00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.uFu 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.uFu 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.uFu 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e38d2298efe3e2018f4d68d2f93acba2324e4422d04c7288930a2121f54c664e 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ucS 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e38d2298efe3e2018f4d68d2f93acba2324e4422d04c7288930a2121f54c664e 3 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e38d2298efe3e2018f4d68d2f93acba2324e4422d04c7288930a2121f54c664e 3 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e38d2298efe3e2018f4d68d2f93acba2324e4422d04c7288930a2121f54c664e 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ucS 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ucS 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.ucS 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5dc4461b9a86fc4e5f5e945fa41ee27b 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.MVE 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5dc4461b9a86fc4e5f5e945fa41ee27b 1 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5dc4461b9a86fc4e5f5e945fa41ee27b 1 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5dc4461b9a86fc4e5f5e945fa41ee27b 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.MVE 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.MVE 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.MVE 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=93f322110ff435d42971ce7331688ab12aeb5dc3222fc535 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.KAE 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 93f322110ff435d42971ce7331688ab12aeb5dc3222fc535 2 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 93f322110ff435d42971ce7331688ab12aeb5dc3222fc535 2 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:10.025 00:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=93f322110ff435d42971ce7331688ab12aeb5dc3222fc535 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.KAE 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.KAE 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.KAE 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:10.025 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:10.286 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=936527a67a8523859caccb9e31b83b26e716c7520f379903 00:21:10.286 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:10.286 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.GwY 00:21:10.286 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 936527a67a8523859caccb9e31b83b26e716c7520f379903 2 00:21:10.286 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 936527a67a8523859caccb9e31b83b26e716c7520f379903 2 00:21:10.286 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:10.286 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:10.286 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=936527a67a8523859caccb9e31b83b26e716c7520f379903 00:21:10.286 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:10.286 00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.GwY 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.GwY 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.GwY 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2484021a5f6265f454a3271c04d4f377 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.P3d 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2484021a5f6265f454a3271c04d4f377 1 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2484021a5f6265f454a3271c04d4f377 1 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2484021a5f6265f454a3271c04d4f377 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.P3d 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.P3d 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.P3d 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:10.286 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=441be0d30d88b44b1b335883a6ffd833bbc91286f59b9d7b1b7492da87346081 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.I48 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 441be0d30d88b44b1b335883a6ffd833bbc91286f59b9d7b1b7492da87346081 3 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 441be0d30d88b44b1b335883a6ffd833bbc91286f59b9d7b1b7492da87346081 3 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=441be0d30d88b44b1b335883a6ffd833bbc91286f59b9d7b1b7492da87346081 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.I48 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.I48 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.I48 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 349315 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 349315 ']' 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.287 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.547 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.547 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:10.547 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 349437 /var/tmp/host.sock 00:21:10.547 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 349437 ']' 00:21:10.547 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:21:10.547 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.547 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:10.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
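At this point keys[0..3] and ckeys[0..2] exist on disk and the trace is waiting on the two SPDK applications started earlier: the nvmf target (pid 349315, default RPC socket) and a second spdk_tgt that plays the host role over /var/tmp/host.sock. With workspace paths shortened, the host-side launch and the wait sketched from the entries above look like:

  # host-side application driving bdev_nvme as the authenticated initiator (pid 349437 here)
  ./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
  hostpid=$!
  waitforlisten "$nvmfpid"                      # nvmf_tgt from the earlier sketch, on /var/tmp/spdk.sock
  waitforlisten "$hostpid" /var/tmp/host.sock   # host app on its private RPC socket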
00:21:10.547 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.547 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.806 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.806 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:10.806 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:10.806 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.806 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.806 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.806 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:10.806 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uFu 00:21:10.806 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.806 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.806 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.806 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.uFu 00:21:10.806 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.uFu 00:21:11.066 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.ucS ]] 00:21:11.066 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ucS 00:21:11.066 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.066 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.066 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.066 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ucS 00:21:11.066 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ucS 00:21:11.066 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:11.066 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.MVE 00:21:11.066 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.066 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.066 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.066 00:02:45 
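Each key file has to be visible to both applications, so the loop above registers every keys[i]/ckeys[i] pair twice: once against the target application (rpc_cmd, which resolves to scripts/rpc.py on the default socket) and once against the host application via the hostrpc wrapper. One iteration, condensed, with the mktemp file names from this run:

  # target side
  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.uFu
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ucS
  # host side (the spdk_tgt listening on /var/tmp/host.sock)
  scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.uFu
  scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ucS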
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.MVE 00:21:11.066 00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.MVE 00:21:11.325 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.KAE ]] 00:21:11.325 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KAE 00:21:11.325 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.325 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.325 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.325 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KAE 00:21:11.325 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KAE 00:21:11.583 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:11.583 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.GwY 00:21:11.583 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.583 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.583 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.583 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.GwY 00:21:11.583 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.GwY 00:21:11.843 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.P3d ]] 00:21:11.843 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.P3d 00:21:11.843 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.843 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.843 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.843 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.P3d 00:21:11.843 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.P3d 00:21:12.102 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:12.102 00:02:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.I48 00:21:12.102 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.102 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.102 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.102 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.I48 00:21:12.102 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.I48 00:21:12.102 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:12.102 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:12.102 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.102 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.102 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:12.102 00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:12.361 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:12.361 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.361 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:12.361 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:12.361 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:12.361 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.361 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.361 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.361 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.361 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.361 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.361 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
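This is the core of the sha256/null iteration for key0: the host is restricted to a single digest and DH group, the target subsystem is told which key (and which controller key, for bidirectional authentication) this host must use, and bdev_nvme then attaches with the same key names. Condensed from the entries above:

  # host: only allow sha256 + the null DH group for this pass
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null
  # target: require DH-HMAC-CHAP from this host NQN (key0 host auth, ckey0 controller auth)
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host: attach and authenticate over TCP
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0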
00:21:12.361 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.621 00:21:12.621 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:12.621 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:12.621 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.881 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.881 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.881 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.881 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.881 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.881 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.881 { 00:21:12.881 "cntlid": 1, 00:21:12.881 "qid": 0, 00:21:12.881 "state": "enabled", 00:21:12.881 "thread": "nvmf_tgt_poll_group_000", 00:21:12.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:12.881 "listen_address": { 00:21:12.881 "trtype": "TCP", 00:21:12.881 "adrfam": "IPv4", 00:21:12.881 "traddr": "10.0.0.2", 00:21:12.881 "trsvcid": "4420" 00:21:12.881 }, 00:21:12.881 "peer_address": { 00:21:12.881 "trtype": "TCP", 00:21:12.881 "adrfam": "IPv4", 00:21:12.881 "traddr": "10.0.0.1", 00:21:12.881 "trsvcid": "48274" 00:21:12.881 }, 00:21:12.881 "auth": { 00:21:12.881 "state": "completed", 00:21:12.881 "digest": "sha256", 00:21:12.881 "dhgroup": "null" 00:21:12.881 } 00:21:12.881 } 00:21:12.881 ]' 00:21:12.881 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.881 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:12.881 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.881 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:12.881 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.881 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.881 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.881 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.140 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:21:13.141 00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:21:16.432 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.694 00:02:51 
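After the bdev_nvme path succeeds, the same pass is repeated with the kernel initiator: nvme-cli is handed the raw DHHC-1 secrets instead of keyring names, then the controller is disconnected and the host entry is removed before the next key is tried. A trimmed version of the sequence above, with the secrets elided:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562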
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.694 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.954 00:21:16.954 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.954 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.954 00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.214 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.214 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.214 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.214 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.214 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.214 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.215 { 00:21:17.215 "cntlid": 3, 00:21:17.215 "qid": 0, 00:21:17.215 "state": "enabled", 00:21:17.215 "thread": "nvmf_tgt_poll_group_000", 00:21:17.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:17.215 "listen_address": { 00:21:17.215 "trtype": "TCP", 00:21:17.215 "adrfam": "IPv4", 00:21:17.215 "traddr": "10.0.0.2", 00:21:17.215 "trsvcid": "4420" 00:21:17.215 }, 00:21:17.215 "peer_address": { 00:21:17.215 "trtype": "TCP", 00:21:17.215 "adrfam": "IPv4", 00:21:17.215 "traddr": "10.0.0.1", 00:21:17.215 "trsvcid": "53386" 00:21:17.215 }, 00:21:17.215 "auth": { 00:21:17.215 "state": "completed", 00:21:17.215 "digest": "sha256", 00:21:17.215 "dhgroup": "null" 00:21:17.215 } 00:21:17.215 } 00:21:17.215 ]' 00:21:17.215 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.215 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:17.215 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.474 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:17.474 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.474 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.474 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.474 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.474 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:21:17.474 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:21:18.042 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.301 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:18.301 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.301 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.301 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.301 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.301 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:18.301 00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:18.301 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:18.301 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.301 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:18.301 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:18.301 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:18.301 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.301 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.301 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.301 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.301 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.302 
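Every attach in this section is verified the same way before teardown: the controller name is read back on the host side, and the target's qpair listing is checked for a completed DH-HMAC-CHAP exchange with the expected digest and DH group (see the JSON dumps above and below). In shorthand, sketched from the trace:

  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)       # target-side RPC
  jq -r '.[0].auth.digest'  <<<"$qpairs"     # expect sha256 in these passes
  jq -r '.[0].auth.dhgroup' <<<"$qpairs"     # expect null
  jq -r '.[0].auth.state'   <<<"$qpairs"     # expect completed
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0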
00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.302 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.302 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.560 00:21:18.560 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.560 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.560 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.819 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.819 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.819 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.819 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.819 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.819 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.819 { 00:21:18.819 "cntlid": 5, 00:21:18.819 "qid": 0, 00:21:18.819 "state": "enabled", 00:21:18.819 "thread": "nvmf_tgt_poll_group_000", 00:21:18.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:18.819 "listen_address": { 00:21:18.819 "trtype": "TCP", 00:21:18.819 "adrfam": "IPv4", 00:21:18.819 "traddr": "10.0.0.2", 00:21:18.819 "trsvcid": "4420" 00:21:18.819 }, 00:21:18.819 "peer_address": { 00:21:18.819 "trtype": "TCP", 00:21:18.819 "adrfam": "IPv4", 00:21:18.819 "traddr": "10.0.0.1", 00:21:18.819 "trsvcid": "53422" 00:21:18.819 }, 00:21:18.819 "auth": { 00:21:18.819 "state": "completed", 00:21:18.819 "digest": "sha256", 00:21:18.819 "dhgroup": "null" 00:21:18.819 } 00:21:18.819 } 00:21:18.819 ]' 00:21:18.819 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.819 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:18.819 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.819 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:18.819 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.078 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.078 00:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.078 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.078 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:21:19.078 00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:21:19.648 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.648 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.648 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.648 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.648 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.648 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.648 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:19.648 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:19.912 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:19.912 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.912 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:19.912 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:19.912 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:19.912 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.912 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:19.912 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.912 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:19.912 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.912 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:19.912 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.912 00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.172 00:21:20.172 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.172 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.172 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.430 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.430 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.430 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.430 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.430 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.430 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.430 { 00:21:20.430 "cntlid": 7, 00:21:20.430 "qid": 0, 00:21:20.430 "state": "enabled", 00:21:20.430 "thread": "nvmf_tgt_poll_group_000", 00:21:20.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:20.430 "listen_address": { 00:21:20.430 "trtype": "TCP", 00:21:20.430 "adrfam": "IPv4", 00:21:20.430 "traddr": "10.0.0.2", 00:21:20.430 "trsvcid": "4420" 00:21:20.430 }, 00:21:20.430 "peer_address": { 00:21:20.430 "trtype": "TCP", 00:21:20.430 "adrfam": "IPv4", 00:21:20.430 "traddr": "10.0.0.1", 00:21:20.430 "trsvcid": "53446" 00:21:20.430 }, 00:21:20.430 "auth": { 00:21:20.430 "state": "completed", 00:21:20.430 "digest": "sha256", 00:21:20.430 "dhgroup": "null" 00:21:20.430 } 00:21:20.430 } 00:21:20.430 ]' 00:21:20.430 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.430 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:20.430 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.430 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:20.430 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.689 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.689 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.689 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.689 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:21:20.689 00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:21:21.258 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.258 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.258 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.258 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.258 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.258 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.258 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.258 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:21.258 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:21.517 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:21.517 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.517 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:21.517 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:21.517 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:21.517 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.517 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.517 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.517 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.517 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.517 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.517 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.517 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:21.776 00:21:21.776 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.776 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.776 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.035 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.035 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.035 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.035 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.035 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.035 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.035 { 00:21:22.035 "cntlid": 9, 00:21:22.035 "qid": 0, 00:21:22.035 "state": "enabled", 00:21:22.035 "thread": "nvmf_tgt_poll_group_000", 00:21:22.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:22.035 "listen_address": { 00:21:22.035 "trtype": "TCP", 00:21:22.035 "adrfam": "IPv4", 00:21:22.035 "traddr": "10.0.0.2", 00:21:22.035 "trsvcid": "4420" 00:21:22.035 }, 00:21:22.035 "peer_address": { 00:21:22.035 "trtype": "TCP", 00:21:22.035 "adrfam": "IPv4", 00:21:22.035 "traddr": "10.0.0.1", 00:21:22.035 "trsvcid": "53466" 00:21:22.035 }, 00:21:22.035 "auth": { 00:21:22.035 "state": "completed", 00:21:22.035 "digest": "sha256", 00:21:22.035 "dhgroup": "ffdhe2048" 00:21:22.035 } 00:21:22.035 } 00:21:22.035 ]' 00:21:22.035 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.035 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:22.035 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.294 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:21:22.294 00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.295 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.295 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.295 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.554 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:21:22.554 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:21:23.122 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.122 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:23.122 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.122 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.122 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.122 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.122 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:23.122 00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:23.122 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:23.122 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.122 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:23.122 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:23.122 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:23.122 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.122 00:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.122 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.122 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.122 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.122 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.122 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.122 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.381 00:21:23.640 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.640 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.640 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.641 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.641 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.641 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.641 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.641 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.641 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.641 { 00:21:23.641 "cntlid": 11, 00:21:23.641 "qid": 0, 00:21:23.641 "state": "enabled", 00:21:23.641 "thread": "nvmf_tgt_poll_group_000", 00:21:23.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:23.641 "listen_address": { 00:21:23.641 "trtype": "TCP", 00:21:23.641 "adrfam": "IPv4", 00:21:23.641 "traddr": "10.0.0.2", 00:21:23.641 "trsvcid": "4420" 00:21:23.641 }, 00:21:23.641 "peer_address": { 00:21:23.641 "trtype": "TCP", 00:21:23.641 "adrfam": "IPv4", 00:21:23.641 "traddr": "10.0.0.1", 00:21:23.641 "trsvcid": "53496" 00:21:23.641 }, 00:21:23.641 "auth": { 00:21:23.641 "state": "completed", 00:21:23.641 "digest": "sha256", 00:21:23.641 "dhgroup": "ffdhe2048" 00:21:23.641 } 00:21:23.641 } 00:21:23.641 ]' 00:21:23.641 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.641 00:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:23.641 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.899 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:23.899 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.899 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.899 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.899 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.158 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:21:24.158 00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:21:24.725 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.725 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:24.725 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.725 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.725 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.725 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.725 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:24.725 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:24.725 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:24.725 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.725 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:24.725 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:24.725 00:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:24.726 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.726 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.726 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.726 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.985 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.985 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.985 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.985 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.985 00:21:25.244 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.244 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.244 00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.244 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.244 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.244 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.244 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.244 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.244 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.244 { 00:21:25.244 "cntlid": 13, 00:21:25.244 "qid": 0, 00:21:25.244 "state": "enabled", 00:21:25.244 "thread": "nvmf_tgt_poll_group_000", 00:21:25.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:25.244 "listen_address": { 00:21:25.244 "trtype": "TCP", 00:21:25.244 "adrfam": "IPv4", 00:21:25.244 "traddr": "10.0.0.2", 00:21:25.244 "trsvcid": "4420" 00:21:25.244 }, 00:21:25.244 "peer_address": { 00:21:25.244 "trtype": "TCP", 00:21:25.244 "adrfam": "IPv4", 00:21:25.244 "traddr": "10.0.0.1", 00:21:25.244 "trsvcid": "36484" 00:21:25.244 }, 00:21:25.244 "auth": { 00:21:25.244 "state": "completed", 00:21:25.244 "digest": 
"sha256", 00:21:25.244 "dhgroup": "ffdhe2048" 00:21:25.244 } 00:21:25.244 } 00:21:25.244 ]' 00:21:25.244 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.509 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:25.509 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.509 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:25.509 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.509 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.509 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.509 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.768 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:21:25.768 00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.336 00:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.336 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.594 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.594 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:26.594 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.594 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:26.594 00:21:26.854 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.854 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.854 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.854 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.854 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.854 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.854 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.854 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.854 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.854 { 00:21:26.854 "cntlid": 15, 00:21:26.854 "qid": 0, 00:21:26.854 "state": "enabled", 00:21:26.854 "thread": "nvmf_tgt_poll_group_000", 00:21:26.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:26.854 "listen_address": { 00:21:26.854 "trtype": "TCP", 00:21:26.854 "adrfam": "IPv4", 00:21:26.854 "traddr": "10.0.0.2", 00:21:26.854 "trsvcid": "4420" 00:21:26.854 }, 00:21:26.854 "peer_address": { 00:21:26.854 "trtype": "TCP", 00:21:26.854 "adrfam": "IPv4", 00:21:26.854 "traddr": "10.0.0.1", 00:21:26.854 
"trsvcid": "36506" 00:21:26.854 }, 00:21:26.854 "auth": { 00:21:26.854 "state": "completed", 00:21:26.854 "digest": "sha256", 00:21:26.854 "dhgroup": "ffdhe2048" 00:21:26.854 } 00:21:26.854 } 00:21:26.854 ]' 00:21:26.854 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.113 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:27.113 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.113 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:27.113 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.113 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.113 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.113 00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.373 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:21:27.373 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:21:27.948 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.948 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.948 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.948 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.948 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.948 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:27.948 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.948 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:27.948 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:28.206 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:28.206 00:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.206 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:28.206 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:28.206 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:28.206 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.206 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.206 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.206 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.206 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.206 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.207 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.207 00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.468 00:21:28.468 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.468 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.468 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.468 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.468 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.468 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.468 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.468 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.468 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.468 { 00:21:28.468 "cntlid": 17, 00:21:28.468 "qid": 0, 00:21:28.468 "state": "enabled", 00:21:28.468 "thread": "nvmf_tgt_poll_group_000", 00:21:28.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:28.468 "listen_address": { 00:21:28.468 "trtype": "TCP", 00:21:28.468 "adrfam": 
"IPv4", 00:21:28.468 "traddr": "10.0.0.2", 00:21:28.468 "trsvcid": "4420" 00:21:28.468 }, 00:21:28.468 "peer_address": { 00:21:28.468 "trtype": "TCP", 00:21:28.468 "adrfam": "IPv4", 00:21:28.468 "traddr": "10.0.0.1", 00:21:28.468 "trsvcid": "36520" 00:21:28.468 }, 00:21:28.468 "auth": { 00:21:28.468 "state": "completed", 00:21:28.468 "digest": "sha256", 00:21:28.468 "dhgroup": "ffdhe3072" 00:21:28.468 } 00:21:28.468 } 00:21:28.469 ]' 00:21:28.469 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.734 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:28.734 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.734 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:28.734 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.734 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.734 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.734 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.993 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:21:28.993 00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:21:29.561 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.561 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.562 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.562 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.562 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.562 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.562 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:29.562 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:29.821 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:29.821 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.821 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:29.821 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:29.821 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:29.821 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.821 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.821 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.821 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.821 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.821 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.821 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.821 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.079 00:21:30.079 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.079 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.079 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.079 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.079 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.079 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.079 00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.079 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.079 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.079 { 
00:21:30.079 "cntlid": 19, 00:21:30.079 "qid": 0, 00:21:30.079 "state": "enabled", 00:21:30.079 "thread": "nvmf_tgt_poll_group_000", 00:21:30.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:30.079 "listen_address": { 00:21:30.079 "trtype": "TCP", 00:21:30.079 "adrfam": "IPv4", 00:21:30.079 "traddr": "10.0.0.2", 00:21:30.079 "trsvcid": "4420" 00:21:30.079 }, 00:21:30.079 "peer_address": { 00:21:30.079 "trtype": "TCP", 00:21:30.079 "adrfam": "IPv4", 00:21:30.079 "traddr": "10.0.0.1", 00:21:30.079 "trsvcid": "36542" 00:21:30.079 }, 00:21:30.079 "auth": { 00:21:30.079 "state": "completed", 00:21:30.079 "digest": "sha256", 00:21:30.079 "dhgroup": "ffdhe3072" 00:21:30.079 } 00:21:30.079 } 00:21:30.079 ]' 00:21:30.079 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.343 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:30.343 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.343 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:30.343 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.343 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.343 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.343 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.603 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:21:30.603 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:21:31.172 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.172 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.172 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.172 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.172 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.172 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.172 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:31.172 00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:31.431 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:31.431 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.431 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:31.431 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:31.431 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:31.431 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.432 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.432 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.432 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.432 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.432 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.432 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.432 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.691 00:21:31.691 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.691 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.691 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.950 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.950 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.950 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.950 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.950 00:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.950 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.950 { 00:21:31.950 "cntlid": 21, 00:21:31.950 "qid": 0, 00:21:31.950 "state": "enabled", 00:21:31.950 "thread": "nvmf_tgt_poll_group_000", 00:21:31.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:31.950 "listen_address": { 00:21:31.950 "trtype": "TCP", 00:21:31.950 "adrfam": "IPv4", 00:21:31.950 "traddr": "10.0.0.2", 00:21:31.950 "trsvcid": "4420" 00:21:31.950 }, 00:21:31.950 "peer_address": { 00:21:31.950 "trtype": "TCP", 00:21:31.950 "adrfam": "IPv4", 00:21:31.950 "traddr": "10.0.0.1", 00:21:31.950 "trsvcid": "36568" 00:21:31.950 }, 00:21:31.950 "auth": { 00:21:31.950 "state": "completed", 00:21:31.950 "digest": "sha256", 00:21:31.950 "dhgroup": "ffdhe3072" 00:21:31.950 } 00:21:31.950 } 00:21:31.950 ]' 00:21:31.950 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.950 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:31.950 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.950 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:31.950 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.950 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.950 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.950 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.209 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:21:32.209 00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:21:32.777 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.777 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:32.777 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.777 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.777 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.777 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.777 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:32.777 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:33.037 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:33.037 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.037 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:33.037 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:33.037 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:33.037 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.037 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:33.037 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.037 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.037 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.037 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:33.037 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.037 00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.296 00:21:33.297 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.297 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.297 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.557 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.557 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.557 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
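The excerpt above, like the rest of this test, loops the same connect_authenticate pattern over each DH-HMAC-CHAP key index and dhgroup: bdev_nvme_set_options restricts the host to one digest/dhgroup, nvmf_subsystem_add_host allows the host NQN on the subsystem with the key under test, bdev_nvme_attach_controller performs the authenticated connect, and jq checks that the resulting qpair reports auth state "completed" with the expected digest and dhgroup before the controller is detached again. The lines below are a minimal standalone sketch of one such iteration, not the test script itself; they assume an SPDK target already serving nqn.2024-03.io.spdk:cnode0 on 10.0.0.2:4420 via its default RPC socket, a host bdev_nvme RPC server on /var/tmp/host.sock, rpc.py run from an SPDK checkout, and keyring entries named key2/ckey2 registered earlier in the run (as the --dhchap-key names in the log imply).

  # Minimal sketch of one connect_authenticate iteration (sha256 / ffdhe3072, key index 2).
  # Assumptions: target on 10.0.0.2:4420 with subsystem nqn.2024-03.io.spdk:cnode0,
  # host RPC server on /var/tmp/host.sock, keys "key2"/"ckey2" already registered.
  rpc=./scripts/rpc.py
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

  # Restrict the host to the digest/dhgroup under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

  # Allow the host NQN on the subsystem with the key pair under test.
  "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Authenticated connect from the SPDK host stack.
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verify the controller came up and the qpair finished authentication as expected.
  [[ "$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]
  [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha256 ]]
  [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe3072 ]]

  # Tear down before the next digest/dhgroup/key combination.
  hostrpc bdev_nvme_detach_controller nvme0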
00:21:33.557 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.557 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.557 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.557 { 00:21:33.557 "cntlid": 23, 00:21:33.557 "qid": 0, 00:21:33.557 "state": "enabled", 00:21:33.557 "thread": "nvmf_tgt_poll_group_000", 00:21:33.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:33.557 "listen_address": { 00:21:33.557 "trtype": "TCP", 00:21:33.557 "adrfam": "IPv4", 00:21:33.557 "traddr": "10.0.0.2", 00:21:33.557 "trsvcid": "4420" 00:21:33.557 }, 00:21:33.557 "peer_address": { 00:21:33.557 "trtype": "TCP", 00:21:33.557 "adrfam": "IPv4", 00:21:33.557 "traddr": "10.0.0.1", 00:21:33.557 "trsvcid": "36590" 00:21:33.557 }, 00:21:33.557 "auth": { 00:21:33.557 "state": "completed", 00:21:33.557 "digest": "sha256", 00:21:33.557 "dhgroup": "ffdhe3072" 00:21:33.557 } 00:21:33.557 } 00:21:33.557 ]' 00:21:33.557 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.557 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:33.557 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.557 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:33.557 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.557 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.557 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.557 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.825 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:21:33.825 00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:21:34.392 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.393 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.393 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.393 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.393 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.393 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.393 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.393 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:34.393 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:34.650 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:34.650 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.650 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:34.650 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:34.650 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:34.650 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.650 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.650 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.651 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.651 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.651 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.651 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.651 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.909 00:21:34.909 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.909 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.909 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.168 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.168 00:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.168 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.168 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.168 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.168 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.168 { 00:21:35.168 "cntlid": 25, 00:21:35.168 "qid": 0, 00:21:35.168 "state": "enabled", 00:21:35.168 "thread": "nvmf_tgt_poll_group_000", 00:21:35.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:35.168 "listen_address": { 00:21:35.168 "trtype": "TCP", 00:21:35.168 "adrfam": "IPv4", 00:21:35.168 "traddr": "10.0.0.2", 00:21:35.168 "trsvcid": "4420" 00:21:35.168 }, 00:21:35.168 "peer_address": { 00:21:35.168 "trtype": "TCP", 00:21:35.168 "adrfam": "IPv4", 00:21:35.168 "traddr": "10.0.0.1", 00:21:35.168 "trsvcid": "38516" 00:21:35.168 }, 00:21:35.168 "auth": { 00:21:35.168 "state": "completed", 00:21:35.168 "digest": "sha256", 00:21:35.168 "dhgroup": "ffdhe4096" 00:21:35.168 } 00:21:35.168 } 00:21:35.168 ]' 00:21:35.168 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.168 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:35.168 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.168 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:35.168 00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.168 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.168 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.168 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.427 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:21:35.427 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:21:35.991 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.991 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:35.991 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.991 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.991 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.991 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.991 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:35.991 00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:36.250 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:36.250 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.250 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:36.250 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:36.250 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:36.250 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.250 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.250 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.250 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.250 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.250 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.250 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.250 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.509 00:21:36.509 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.509 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.509 00:03:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.795 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.795 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.795 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.795 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.795 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.795 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.795 { 00:21:36.795 "cntlid": 27, 00:21:36.795 "qid": 0, 00:21:36.795 "state": "enabled", 00:21:36.795 "thread": "nvmf_tgt_poll_group_000", 00:21:36.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:36.795 "listen_address": { 00:21:36.795 "trtype": "TCP", 00:21:36.795 "adrfam": "IPv4", 00:21:36.795 "traddr": "10.0.0.2", 00:21:36.795 "trsvcid": "4420" 00:21:36.795 }, 00:21:36.795 "peer_address": { 00:21:36.795 "trtype": "TCP", 00:21:36.796 "adrfam": "IPv4", 00:21:36.796 "traddr": "10.0.0.1", 00:21:36.796 "trsvcid": "38554" 00:21:36.796 }, 00:21:36.796 "auth": { 00:21:36.796 "state": "completed", 00:21:36.796 "digest": "sha256", 00:21:36.796 "dhgroup": "ffdhe4096" 00:21:36.796 } 00:21:36.796 } 00:21:36.796 ]' 00:21:36.796 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.796 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:36.796 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.796 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.796 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.796 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.796 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.796 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.055 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:21:37.055 00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:21:37.621 00:03:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.621 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:37.621 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.621 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.621 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.621 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.621 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:37.621 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:37.880 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:37.880 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.880 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:37.880 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:37.880 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:37.880 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.880 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.880 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.880 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.880 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.880 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.880 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.880 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.139 00:21:38.139 00:03:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.139 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.139 00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.399 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.399 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.399 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.399 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.399 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.399 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.399 { 00:21:38.399 "cntlid": 29, 00:21:38.399 "qid": 0, 00:21:38.399 "state": "enabled", 00:21:38.399 "thread": "nvmf_tgt_poll_group_000", 00:21:38.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:38.399 "listen_address": { 00:21:38.399 "trtype": "TCP", 00:21:38.399 "adrfam": "IPv4", 00:21:38.399 "traddr": "10.0.0.2", 00:21:38.399 "trsvcid": "4420" 00:21:38.399 }, 00:21:38.399 "peer_address": { 00:21:38.399 "trtype": "TCP", 00:21:38.399 "adrfam": "IPv4", 00:21:38.399 "traddr": "10.0.0.1", 00:21:38.399 "trsvcid": "38586" 00:21:38.399 }, 00:21:38.399 "auth": { 00:21:38.399 "state": "completed", 00:21:38.399 "digest": "sha256", 00:21:38.399 "dhgroup": "ffdhe4096" 00:21:38.399 } 00:21:38.399 } 00:21:38.399 ]' 00:21:38.399 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.399 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:38.399 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.399 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:38.399 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.399 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.399 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.399 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.658 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:21:38.658 00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 
0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:21:39.226 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.226 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:39.226 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.226 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.226 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.226 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.226 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:39.226 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:39.485 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:39.485 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.485 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:39.485 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:39.485 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:39.485 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.485 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:39.485 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.485 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.485 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.485 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:39.485 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.485 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.744 00:21:39.744 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.744 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.745 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.004 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.004 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.005 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.005 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.005 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.005 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.005 { 00:21:40.005 "cntlid": 31, 00:21:40.005 "qid": 0, 00:21:40.005 "state": "enabled", 00:21:40.005 "thread": "nvmf_tgt_poll_group_000", 00:21:40.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:40.005 "listen_address": { 00:21:40.005 "trtype": "TCP", 00:21:40.005 "adrfam": "IPv4", 00:21:40.005 "traddr": "10.0.0.2", 00:21:40.005 "trsvcid": "4420" 00:21:40.005 }, 00:21:40.005 "peer_address": { 00:21:40.005 "trtype": "TCP", 00:21:40.005 "adrfam": "IPv4", 00:21:40.005 "traddr": "10.0.0.1", 00:21:40.005 "trsvcid": "38624" 00:21:40.005 }, 00:21:40.005 "auth": { 00:21:40.005 "state": "completed", 00:21:40.005 "digest": "sha256", 00:21:40.005 "dhgroup": "ffdhe4096" 00:21:40.005 } 00:21:40.005 } 00:21:40.005 ]' 00:21:40.005 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.005 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:40.005 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.005 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:40.005 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.005 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.005 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.005 00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.266 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:21:40.266 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:21:40.832 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.832 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.832 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.832 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.832 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.832 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.832 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.832 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:40.832 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:41.091 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:41.091 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.091 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:41.091 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:41.091 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:41.091 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.091 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.091 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.091 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.091 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.091 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.091 00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.091 00:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.349 00:21:41.608 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.608 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.608 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.608 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.608 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.608 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.608 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.608 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.608 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.608 { 00:21:41.608 "cntlid": 33, 00:21:41.608 "qid": 0, 00:21:41.608 "state": "enabled", 00:21:41.608 "thread": "nvmf_tgt_poll_group_000", 00:21:41.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:41.608 "listen_address": { 00:21:41.608 "trtype": "TCP", 00:21:41.608 "adrfam": "IPv4", 00:21:41.608 "traddr": "10.0.0.2", 00:21:41.608 "trsvcid": "4420" 00:21:41.608 }, 00:21:41.608 "peer_address": { 00:21:41.608 "trtype": "TCP", 00:21:41.608 "adrfam": "IPv4", 00:21:41.608 "traddr": "10.0.0.1", 00:21:41.608 "trsvcid": "38668" 00:21:41.608 }, 00:21:41.608 "auth": { 00:21:41.608 "state": "completed", 00:21:41.608 "digest": "sha256", 00:21:41.608 "dhgroup": "ffdhe6144" 00:21:41.608 } 00:21:41.608 } 00:21:41.608 ]' 00:21:41.608 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.608 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:41.608 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.869 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.869 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.869 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.869 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.869 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.128 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:21:42.128 00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:21:42.697 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.697 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.697 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.697 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.697 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.697 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.697 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:42.697 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:42.957 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:42.957 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.957 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:42.957 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:42.957 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:42.957 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.957 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.957 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.957 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.957 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.957 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.957 00:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.957 00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.216 00:21:43.216 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.216 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.216 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.481 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.481 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.481 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.481 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.481 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.481 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.481 { 00:21:43.481 "cntlid": 35, 00:21:43.481 "qid": 0, 00:21:43.481 "state": "enabled", 00:21:43.481 "thread": "nvmf_tgt_poll_group_000", 00:21:43.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:43.481 "listen_address": { 00:21:43.481 "trtype": "TCP", 00:21:43.481 "adrfam": "IPv4", 00:21:43.481 "traddr": "10.0.0.2", 00:21:43.481 "trsvcid": "4420" 00:21:43.481 }, 00:21:43.481 "peer_address": { 00:21:43.481 "trtype": "TCP", 00:21:43.481 "adrfam": "IPv4", 00:21:43.481 "traddr": "10.0.0.1", 00:21:43.481 "trsvcid": "38704" 00:21:43.481 }, 00:21:43.481 "auth": { 00:21:43.481 "state": "completed", 00:21:43.481 "digest": "sha256", 00:21:43.481 "dhgroup": "ffdhe6144" 00:21:43.481 } 00:21:43.481 } 00:21:43.481 ]' 00:21:43.481 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.481 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:43.481 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.481 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.482 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.482 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.482 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.482 00:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.750 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:21:43.750 00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:21:44.318 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.318 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:44.318 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.318 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.318 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.318 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.318 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:44.319 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:44.578 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:44.578 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.578 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:44.578 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:44.578 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:44.578 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.578 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.578 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.578 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.578 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.578 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.578 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.578 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.840 00:21:44.840 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.840 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.840 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.101 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.101 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.101 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.101 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.101 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.101 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.101 { 00:21:45.101 "cntlid": 37, 00:21:45.101 "qid": 0, 00:21:45.101 "state": "enabled", 00:21:45.101 "thread": "nvmf_tgt_poll_group_000", 00:21:45.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:45.101 "listen_address": { 00:21:45.101 "trtype": "TCP", 00:21:45.101 "adrfam": "IPv4", 00:21:45.101 "traddr": "10.0.0.2", 00:21:45.101 "trsvcid": "4420" 00:21:45.101 }, 00:21:45.101 "peer_address": { 00:21:45.101 "trtype": "TCP", 00:21:45.101 "adrfam": "IPv4", 00:21:45.101 "traddr": "10.0.0.1", 00:21:45.101 "trsvcid": "56340" 00:21:45.101 }, 00:21:45.101 "auth": { 00:21:45.101 "state": "completed", 00:21:45.101 "digest": "sha256", 00:21:45.101 "dhgroup": "ffdhe6144" 00:21:45.101 } 00:21:45.101 } 00:21:45.101 ]' 00:21:45.101 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.101 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:45.101 00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.101 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:45.101 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.360 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.360 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.360 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.360 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:21:45.360 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:21:45.926 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.926 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:45.926 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.926 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.185 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.185 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.185 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:46.185 00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:46.185 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:21:46.185 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.185 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:46.185 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:46.185 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:46.185 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.185 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:46.186 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
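Each iteration also exercises the kernel initiator path: once the SPDK host-stack connect is torn down, nvme-cli connects to the same subsystem with the secrets passed literally via --dhchap-secret/--dhchap-ctrl-secret, the "disconnected 1 controller(s)" lines confirm the authenticated session was set up and dropped cleanly, and the host NQN is then removed from the subsystem before the next key is tried. A minimal sketch of that leg follows; it assumes an nvme-cli build with DH-HMAC-CHAP support, and the DHHC-1 values are placeholders for secrets generated earlier in the run, not the test's actual keys.

  # Minimal sketch of the nvme-cli leg of one iteration (placeholders, not real keys).
  host_uuid=80aaeb9f-0274-ea11-906e-0017a4403562
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:${host_uuid}
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Kernel-initiator connect, mirroring target/auth.sh's nvme_connect helper.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$host_uuid" -l 0 \
      --dhchap-secret 'DHHC-1:00:<host key>' --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'

  # "disconnected 1 controller(s)" in the log confirms the authenticated session existed.
  nvme disconnect -n "$subnqn"

  # Drop the host from the subsystem before moving on to the next key.
  ./scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"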
00:21:46.186 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.186 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.186 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:46.186 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.186 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:46.753 00:21:46.753 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.753 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.753 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.753 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.753 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.753 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.753 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.753 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.754 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.754 { 00:21:46.754 "cntlid": 39, 00:21:46.754 "qid": 0, 00:21:46.754 "state": "enabled", 00:21:46.754 "thread": "nvmf_tgt_poll_group_000", 00:21:46.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:46.754 "listen_address": { 00:21:46.754 "trtype": "TCP", 00:21:46.754 "adrfam": "IPv4", 00:21:46.754 "traddr": "10.0.0.2", 00:21:46.754 "trsvcid": "4420" 00:21:46.754 }, 00:21:46.754 "peer_address": { 00:21:46.754 "trtype": "TCP", 00:21:46.754 "adrfam": "IPv4", 00:21:46.754 "traddr": "10.0.0.1", 00:21:46.754 "trsvcid": "56362" 00:21:46.754 }, 00:21:46.754 "auth": { 00:21:46.754 "state": "completed", 00:21:46.754 "digest": "sha256", 00:21:46.754 "dhgroup": "ffdhe6144" 00:21:46.754 } 00:21:46.754 } 00:21:46.754 ]' 00:21:46.754 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.012 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:47.012 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.012 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:47.012 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:47.012 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.013 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.013 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.271 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:21:47.271 00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.841 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.842 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.842 00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.407 00:21:48.407 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.407 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.407 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.666 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.666 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.666 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.666 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.666 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.666 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.666 { 00:21:48.666 "cntlid": 41, 00:21:48.666 "qid": 0, 00:21:48.666 "state": "enabled", 00:21:48.666 "thread": "nvmf_tgt_poll_group_000", 00:21:48.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:48.666 "listen_address": { 00:21:48.666 "trtype": "TCP", 00:21:48.666 "adrfam": "IPv4", 00:21:48.666 "traddr": "10.0.0.2", 00:21:48.666 "trsvcid": "4420" 00:21:48.666 }, 00:21:48.666 "peer_address": { 00:21:48.666 "trtype": "TCP", 00:21:48.666 "adrfam": "IPv4", 00:21:48.666 "traddr": "10.0.0.1", 00:21:48.666 "trsvcid": "56392" 00:21:48.666 }, 00:21:48.666 "auth": { 00:21:48.666 "state": "completed", 00:21:48.666 "digest": "sha256", 00:21:48.666 "dhgroup": "ffdhe8192" 00:21:48.666 } 00:21:48.666 } 00:21:48.666 ]' 00:21:48.666 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.666 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:48.666 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.666 
00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:48.666 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.666 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.666 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.666 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.925 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:21:48.925 00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:21:49.494 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.494 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:49.494 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.494 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.494 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.494 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.494 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:49.494 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:49.753 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:49.753 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.753 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:49.753 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:49.753 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:49.753 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.753 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.753 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.753 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.753 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.753 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.753 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.753 00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.322 00:21:50.322 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.322 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.322 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.322 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.322 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.322 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.322 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.581 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.581 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.581 { 00:21:50.581 "cntlid": 43, 00:21:50.581 "qid": 0, 00:21:50.581 "state": "enabled", 00:21:50.581 "thread": "nvmf_tgt_poll_group_000", 00:21:50.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:50.581 "listen_address": { 00:21:50.581 "trtype": "TCP", 00:21:50.581 "adrfam": "IPv4", 00:21:50.581 "traddr": "10.0.0.2", 00:21:50.581 "trsvcid": "4420" 00:21:50.581 }, 00:21:50.581 "peer_address": { 00:21:50.581 "trtype": "TCP", 00:21:50.581 "adrfam": "IPv4", 00:21:50.581 "traddr": "10.0.0.1", 00:21:50.581 "trsvcid": "56410" 00:21:50.581 }, 00:21:50.581 "auth": { 00:21:50.581 "state": "completed", 00:21:50.581 "digest": "sha256", 00:21:50.581 "dhgroup": "ffdhe8192" 00:21:50.581 } 00:21:50.581 } 00:21:50.581 ]' 00:21:50.581 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.581 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:50.581 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.581 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:50.581 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.581 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.581 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.582 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.840 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:21:50.840 00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:21:51.412 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.412 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:51.412 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.412 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.412 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.412 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.412 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:51.412 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:51.671 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:51.671 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.671 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:51.671 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:21:51.671 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:51.671 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.671 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.671 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.671 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.671 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.671 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.671 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.671 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.929 00:21:52.187 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.187 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.187 00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.187 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.187 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.187 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.187 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.187 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.187 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.187 { 00:21:52.187 "cntlid": 45, 00:21:52.187 "qid": 0, 00:21:52.187 "state": "enabled", 00:21:52.187 "thread": "nvmf_tgt_poll_group_000", 00:21:52.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:52.187 "listen_address": { 00:21:52.187 "trtype": "TCP", 00:21:52.187 "adrfam": "IPv4", 00:21:52.187 "traddr": "10.0.0.2", 00:21:52.187 "trsvcid": "4420" 00:21:52.187 }, 00:21:52.187 "peer_address": { 00:21:52.187 "trtype": "TCP", 00:21:52.187 "adrfam": "IPv4", 00:21:52.187 "traddr": "10.0.0.1", 00:21:52.187 "trsvcid": "56446" 00:21:52.187 }, 00:21:52.187 "auth": { 00:21:52.187 "state": 
"completed", 00:21:52.187 "digest": "sha256", 00:21:52.187 "dhgroup": "ffdhe8192" 00:21:52.187 } 00:21:52.187 } 00:21:52.187 ]' 00:21:52.187 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.464 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:52.464 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.464 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.464 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.464 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.464 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.464 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.723 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:21:52.723 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:21:53.291 00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.291 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:53.291 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.291 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.291 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.292 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.860 00:21:53.860 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.860 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.860 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.119 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.119 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.119 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.119 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.119 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.119 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.119 { 00:21:54.119 "cntlid": 47, 00:21:54.119 "qid": 0, 00:21:54.119 "state": "enabled", 00:21:54.119 "thread": "nvmf_tgt_poll_group_000", 00:21:54.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:54.119 "listen_address": { 00:21:54.119 "trtype": "TCP", 00:21:54.119 "adrfam": "IPv4", 00:21:54.119 "traddr": "10.0.0.2", 00:21:54.119 "trsvcid": "4420" 00:21:54.119 }, 00:21:54.119 "peer_address": { 00:21:54.119 "trtype": "TCP", 00:21:54.119 "adrfam": "IPv4", 00:21:54.119 "traddr": 
"10.0.0.1", 00:21:54.119 "trsvcid": "56482" 00:21:54.119 }, 00:21:54.119 "auth": { 00:21:54.119 "state": "completed", 00:21:54.119 "digest": "sha256", 00:21:54.119 "dhgroup": "ffdhe8192" 00:21:54.119 } 00:21:54.119 } 00:21:54.119 ]' 00:21:54.119 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.119 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:54.120 00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.120 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.120 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.378 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.378 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.378 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.378 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:21:54.378 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:21:54.943 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.943 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:54.943 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.943 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.943 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.943 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:54.943 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.943 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.943 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:54.943 00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:55.202 00:03:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:55.202 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.202 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:55.202 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:55.202 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:55.202 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.202 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.202 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.202 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.202 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.202 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.202 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.202 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.461 00:21:55.461 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.461 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.461 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.735 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.735 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.735 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.735 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.735 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.735 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.735 { 00:21:55.735 "cntlid": 49, 00:21:55.735 "qid": 0, 00:21:55.735 "state": "enabled", 00:21:55.735 "thread": "nvmf_tgt_poll_group_000", 00:21:55.736 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:55.736 "listen_address": { 00:21:55.736 "trtype": "TCP", 00:21:55.736 "adrfam": "IPv4", 00:21:55.736 "traddr": "10.0.0.2", 00:21:55.736 "trsvcid": "4420" 00:21:55.736 }, 00:21:55.736 "peer_address": { 00:21:55.736 "trtype": "TCP", 00:21:55.736 "adrfam": "IPv4", 00:21:55.736 "traddr": "10.0.0.1", 00:21:55.736 "trsvcid": "41850" 00:21:55.736 }, 00:21:55.736 "auth": { 00:21:55.736 "state": "completed", 00:21:55.736 "digest": "sha384", 00:21:55.736 "dhgroup": "null" 00:21:55.736 } 00:21:55.736 } 00:21:55.736 ]' 00:21:55.736 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.736 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:55.736 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.736 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:55.736 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.736 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.736 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.736 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.994 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:21:55.994 00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:21:56.560 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.560 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:56.560 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.560 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.560 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.560 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.560 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:21:56.560 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:56.819 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:56.819 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.819 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:56.819 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:56.819 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:56.819 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.819 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.819 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.819 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.819 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.819 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.819 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.820 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.078 00:21:57.078 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.078 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.078 00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.337 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.337 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.337 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.337 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.337 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.337 
00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.337 { 00:21:57.337 "cntlid": 51, 00:21:57.337 "qid": 0, 00:21:57.337 "state": "enabled", 00:21:57.337 "thread": "nvmf_tgt_poll_group_000", 00:21:57.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:57.337 "listen_address": { 00:21:57.337 "trtype": "TCP", 00:21:57.337 "adrfam": "IPv4", 00:21:57.337 "traddr": "10.0.0.2", 00:21:57.337 "trsvcid": "4420" 00:21:57.337 }, 00:21:57.337 "peer_address": { 00:21:57.337 "trtype": "TCP", 00:21:57.337 "adrfam": "IPv4", 00:21:57.337 "traddr": "10.0.0.1", 00:21:57.337 "trsvcid": "41868" 00:21:57.337 }, 00:21:57.337 "auth": { 00:21:57.337 "state": "completed", 00:21:57.337 "digest": "sha384", 00:21:57.337 "dhgroup": "null" 00:21:57.337 } 00:21:57.337 } 00:21:57.337 ]' 00:21:57.337 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.337 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:57.337 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.337 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:57.337 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.337 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.337 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.337 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.596 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:21:57.596 00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:21:58.163 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.163 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:58.163 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.163 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.163 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.163 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:21:58.164 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:58.164 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:58.490 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:58.490 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.490 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:58.490 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:58.490 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:58.490 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.490 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.490 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.490 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.490 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.490 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.490 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.490 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.758 00:21:58.758 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.758 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.758 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.039 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.039 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.039 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.039 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:59.039 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.039 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.039 { 00:21:59.039 "cntlid": 53, 00:21:59.039 "qid": 0, 00:21:59.039 "state": "enabled", 00:21:59.039 "thread": "nvmf_tgt_poll_group_000", 00:21:59.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:21:59.039 "listen_address": { 00:21:59.039 "trtype": "TCP", 00:21:59.039 "adrfam": "IPv4", 00:21:59.039 "traddr": "10.0.0.2", 00:21:59.039 "trsvcid": "4420" 00:21:59.039 }, 00:21:59.039 "peer_address": { 00:21:59.039 "trtype": "TCP", 00:21:59.039 "adrfam": "IPv4", 00:21:59.039 "traddr": "10.0.0.1", 00:21:59.039 "trsvcid": "41890" 00:21:59.039 }, 00:21:59.039 "auth": { 00:21:59.039 "state": "completed", 00:21:59.039 "digest": "sha384", 00:21:59.039 "dhgroup": "null" 00:21:59.039 } 00:21:59.039 } 00:21:59.039 ]' 00:21:59.039 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.039 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:59.039 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.039 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:59.039 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.039 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.039 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.039 00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.316 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:21:59.316 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:21:59.922 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.922 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:59.922 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.922 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.922 00:03:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.922 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.922 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:59.922 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:00.289 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:22:00.289 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.289 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:00.289 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:00.289 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:00.289 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.289 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:00.289 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.289 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.289 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.289 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:00.289 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.289 00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.289 00:22:00.289 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.289 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.289 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.567 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.567 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.567 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:22:00.567 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.567 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.567 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.567 { 00:22:00.567 "cntlid": 55, 00:22:00.567 "qid": 0, 00:22:00.567 "state": "enabled", 00:22:00.567 "thread": "nvmf_tgt_poll_group_000", 00:22:00.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:00.567 "listen_address": { 00:22:00.567 "trtype": "TCP", 00:22:00.567 "adrfam": "IPv4", 00:22:00.567 "traddr": "10.0.0.2", 00:22:00.567 "trsvcid": "4420" 00:22:00.567 }, 00:22:00.567 "peer_address": { 00:22:00.567 "trtype": "TCP", 00:22:00.567 "adrfam": "IPv4", 00:22:00.567 "traddr": "10.0.0.1", 00:22:00.567 "trsvcid": "41924" 00:22:00.567 }, 00:22:00.567 "auth": { 00:22:00.567 "state": "completed", 00:22:00.567 "digest": "sha384", 00:22:00.567 "dhgroup": "null" 00:22:00.567 } 00:22:00.567 } 00:22:00.567 ]' 00:22:00.567 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.567 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:00.567 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.567 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:00.567 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.567 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.567 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.567 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.844 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:00.844 00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:01.428 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.428 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.428 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.428 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.428 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.428 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:01.428 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.428 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:01.428 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:01.687 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:22:01.687 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.687 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:01.687 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:01.687 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:01.687 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.687 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.687 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.687 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.687 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.687 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.688 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.688 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.946 00:22:01.946 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.946 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.946 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.206 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.206 00:03:36 
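
For readers following the trace, each pass above begins by reconfiguring the host-side NVMe driver over the RPC socket at /var/tmp/host.sock so that only one digest/DH-group combination can be negotiated. A minimal sketch of that step, using the rpc.py path and socket seen in this run; the shell variables are placeholders for illustration, not names from the test script itself:

# Host-side bdev_nvme options: restrict DH-HMAC-CHAP negotiation to one
# digest and one DH group per pass (values taken from the trace above).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock

# The groups listed are the ones exercised in this part of the log.
for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do
    "$rpc" -s "$host_sock" bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
    # ...one connect/verify cycle per key id follows (see the later snippets)...
done
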
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.206 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.206 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.206 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.206 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.206 { 00:22:02.206 "cntlid": 57, 00:22:02.206 "qid": 0, 00:22:02.206 "state": "enabled", 00:22:02.206 "thread": "nvmf_tgt_poll_group_000", 00:22:02.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:02.206 "listen_address": { 00:22:02.206 "trtype": "TCP", 00:22:02.206 "adrfam": "IPv4", 00:22:02.206 "traddr": "10.0.0.2", 00:22:02.206 "trsvcid": "4420" 00:22:02.206 }, 00:22:02.206 "peer_address": { 00:22:02.206 "trtype": "TCP", 00:22:02.206 "adrfam": "IPv4", 00:22:02.206 "traddr": "10.0.0.1", 00:22:02.206 "trsvcid": "41940" 00:22:02.206 }, 00:22:02.206 "auth": { 00:22:02.206 "state": "completed", 00:22:02.206 "digest": "sha384", 00:22:02.206 "dhgroup": "ffdhe2048" 00:22:02.206 } 00:22:02.206 } 00:22:02.206 ]' 00:22:02.206 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.206 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:02.206 00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.206 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:02.206 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.206 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.206 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.207 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.466 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:02.466 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:03.032 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.033 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:03.033 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.033 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.033 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.033 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.033 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:03.033 00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:03.292 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:22:03.292 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.292 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:03.292 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:03.292 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:03.292 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.292 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.292 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.292 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.292 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.292 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.292 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.292 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.551 00:22:03.551 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.551 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.551 00:03:38 
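
The core of each connect_authenticate pass pairs a target-side RPC, which authorizes the host NQN on the subsystem with a DH-CHAP key (and optionally a controller key), with a host-side attach that presents the same key names. A condensed sketch built from the RPC calls in the trace; here rpc_cmd stands for the test's wrapper around rpc.py for the target socket, $rpc is the host-side path from the previous snippet, and key1/ckey1 are key names registered earlier in the test, outside this excerpt:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

# Target side: authorize the host NQN on the subsystem and bind the DH-CHAP
# key pair to it.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller through the host RPC socket, presenting the
# same key names so the DH-HMAC-CHAP handshake can complete.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
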
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.810 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.810 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.810 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.810 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.810 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.810 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.810 { 00:22:03.810 "cntlid": 59, 00:22:03.810 "qid": 0, 00:22:03.810 "state": "enabled", 00:22:03.810 "thread": "nvmf_tgt_poll_group_000", 00:22:03.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:03.810 "listen_address": { 00:22:03.810 "trtype": "TCP", 00:22:03.810 "adrfam": "IPv4", 00:22:03.810 "traddr": "10.0.0.2", 00:22:03.810 "trsvcid": "4420" 00:22:03.810 }, 00:22:03.810 "peer_address": { 00:22:03.810 "trtype": "TCP", 00:22:03.810 "adrfam": "IPv4", 00:22:03.810 "traddr": "10.0.0.1", 00:22:03.810 "trsvcid": "41976" 00:22:03.810 }, 00:22:03.810 "auth": { 00:22:03.810 "state": "completed", 00:22:03.810 "digest": "sha384", 00:22:03.810 "dhgroup": "ffdhe2048" 00:22:03.810 } 00:22:03.810 } 00:22:03.810 ]' 00:22:03.810 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.810 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:03.810 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.810 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:03.810 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.810 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.810 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.810 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.068 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:04.068 00:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:04.636 00:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.636 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.636 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.636 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.636 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.636 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.636 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:04.636 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:04.895 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:22:04.895 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.895 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:04.895 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:04.895 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:04.895 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.895 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.895 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.895 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.895 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.895 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.895 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.895 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.154 00:22:05.154 00:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.154 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.154 00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.413 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.413 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.413 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.413 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.413 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.413 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.413 { 00:22:05.413 "cntlid": 61, 00:22:05.413 "qid": 0, 00:22:05.413 "state": "enabled", 00:22:05.413 "thread": "nvmf_tgt_poll_group_000", 00:22:05.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:05.413 "listen_address": { 00:22:05.413 "trtype": "TCP", 00:22:05.413 "adrfam": "IPv4", 00:22:05.413 "traddr": "10.0.0.2", 00:22:05.413 "trsvcid": "4420" 00:22:05.413 }, 00:22:05.413 "peer_address": { 00:22:05.413 "trtype": "TCP", 00:22:05.413 "adrfam": "IPv4", 00:22:05.413 "traddr": "10.0.0.1", 00:22:05.413 "trsvcid": "53050" 00:22:05.413 }, 00:22:05.413 "auth": { 00:22:05.413 "state": "completed", 00:22:05.413 "digest": "sha384", 00:22:05.413 "dhgroup": "ffdhe2048" 00:22:05.413 } 00:22:05.413 } 00:22:05.413 ]' 00:22:05.413 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.413 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:05.413 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.413 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:05.413 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.413 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.413 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.413 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.673 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:05.673 00:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 
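
After the attach, the trace verifies both ends: the host RPC must report the new controller by name, and the target's qpair listing must show that exactly the expected digest, DH group, and authentication state were negotiated. The same checks, condensed for the sha384/ffdhe2048 pass above, with the same placeholder variables as in the earlier snippets:

# Host RPC: the freshly attached controller should be visible by name.
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Target RPC: dump the subsystem's queue pairs and check what was negotiated.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach before the kernel-initiator half of the cycle.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
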
0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:06.240 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.240 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:06.240 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.240 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.240 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.240 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.240 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:06.240 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:06.499 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:22:06.499 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.499 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:06.499 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:06.499 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:06.499 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.499 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:06.499 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.499 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.499 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.499 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:06.499 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.499 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:06.757 00:22:06.757 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.757 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.757 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.016 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.016 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.016 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.016 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.016 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.016 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.016 { 00:22:07.016 "cntlid": 63, 00:22:07.016 "qid": 0, 00:22:07.017 "state": "enabled", 00:22:07.017 "thread": "nvmf_tgt_poll_group_000", 00:22:07.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:07.017 "listen_address": { 00:22:07.017 "trtype": "TCP", 00:22:07.017 "adrfam": "IPv4", 00:22:07.017 "traddr": "10.0.0.2", 00:22:07.017 "trsvcid": "4420" 00:22:07.017 }, 00:22:07.017 "peer_address": { 00:22:07.017 "trtype": "TCP", 00:22:07.017 "adrfam": "IPv4", 00:22:07.017 "traddr": "10.0.0.1", 00:22:07.017 "trsvcid": "53080" 00:22:07.017 }, 00:22:07.017 "auth": { 00:22:07.017 "state": "completed", 00:22:07.017 "digest": "sha384", 00:22:07.017 "dhgroup": "ffdhe2048" 00:22:07.017 } 00:22:07.017 } 00:22:07.017 ]' 00:22:07.017 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.017 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:07.017 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.017 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:07.017 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.017 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.017 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.017 00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.276 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:07.276 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:07.843 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.843 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:07.843 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.843 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.843 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.843 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.843 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.843 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:07.843 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:08.102 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:22:08.102 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.102 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:08.102 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:08.102 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:08.102 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.102 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.102 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.102 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.102 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.102 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.102 00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.102 00:03:42 
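
Each pass then repeats the authentication from the kernel initiator: nvme-cli connects with the plaintext DHHC-1 secrets, disconnects, and the host entry is removed from the subsystem before the next key is tried. A sketch using the flags from the trace, where $key and $ckey stand for the DHHC-1 secrets generated earlier in the test:

# Kernel initiator: connect with DH-HMAC-CHAP secrets ($key is the host
# secret; $ckey, when set, also authenticates the controller to the host).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
    --dhchap-secret "$key" ${ckey:+--dhchap-ctrl-secret "$ckey"}

# Tear the connection down and de-authorize the host before the next pass.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

In the key3 passes above no controller secret is supplied, so only the host proves its identity; the passes that also pass --dhchap-ctrl-secret exercise bidirectional authentication.
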
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.361 00:22:08.361 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.361 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.361 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.620 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.620 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.620 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.620 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.620 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.620 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.620 { 00:22:08.620 "cntlid": 65, 00:22:08.620 "qid": 0, 00:22:08.620 "state": "enabled", 00:22:08.620 "thread": "nvmf_tgt_poll_group_000", 00:22:08.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:08.620 "listen_address": { 00:22:08.620 "trtype": "TCP", 00:22:08.620 "adrfam": "IPv4", 00:22:08.620 "traddr": "10.0.0.2", 00:22:08.620 "trsvcid": "4420" 00:22:08.620 }, 00:22:08.620 "peer_address": { 00:22:08.620 "trtype": "TCP", 00:22:08.620 "adrfam": "IPv4", 00:22:08.620 "traddr": "10.0.0.1", 00:22:08.620 "trsvcid": "53110" 00:22:08.620 }, 00:22:08.620 "auth": { 00:22:08.620 "state": "completed", 00:22:08.620 "digest": "sha384", 00:22:08.620 "dhgroup": "ffdhe3072" 00:22:08.620 } 00:22:08.620 } 00:22:08.620 ]' 00:22:08.620 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.620 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:08.620 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.620 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:08.620 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.620 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.620 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.620 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.880 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:08.880 00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:09.447 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.447 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:09.447 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.447 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.447 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.447 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.447 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:09.447 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:09.706 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:22:09.706 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.706 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:09.706 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:09.706 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:09.706 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.707 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.707 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.707 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.707 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.707 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.707 00:03:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.707 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.966 00:22:09.966 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.966 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.966 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.225 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.225 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.225 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.225 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.225 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.225 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.225 { 00:22:10.225 "cntlid": 67, 00:22:10.225 "qid": 0, 00:22:10.225 "state": "enabled", 00:22:10.225 "thread": "nvmf_tgt_poll_group_000", 00:22:10.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:10.225 "listen_address": { 00:22:10.225 "trtype": "TCP", 00:22:10.225 "adrfam": "IPv4", 00:22:10.225 "traddr": "10.0.0.2", 00:22:10.225 "trsvcid": "4420" 00:22:10.225 }, 00:22:10.225 "peer_address": { 00:22:10.225 "trtype": "TCP", 00:22:10.225 "adrfam": "IPv4", 00:22:10.225 "traddr": "10.0.0.1", 00:22:10.225 "trsvcid": "53150" 00:22:10.225 }, 00:22:10.225 "auth": { 00:22:10.225 "state": "completed", 00:22:10.225 "digest": "sha384", 00:22:10.225 "dhgroup": "ffdhe3072" 00:22:10.225 } 00:22:10.225 } 00:22:10.225 ]' 00:22:10.225 00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.225 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:10.225 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.225 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:10.225 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.225 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.225 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.225 00:03:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.484 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:10.484 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:11.051 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.051 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.051 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.051 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.051 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.051 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.051 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:11.051 00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:11.311 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:11.311 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.311 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:11.311 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:11.311 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:11.311 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.311 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.311 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.311 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.311 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.311 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.311 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.311 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.570 00:22:11.570 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.570 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.570 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.830 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.830 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.830 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.830 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.830 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.830 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.830 { 00:22:11.830 "cntlid": 69, 00:22:11.830 "qid": 0, 00:22:11.830 "state": "enabled", 00:22:11.830 "thread": "nvmf_tgt_poll_group_000", 00:22:11.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:11.830 "listen_address": { 00:22:11.830 "trtype": "TCP", 00:22:11.830 "adrfam": "IPv4", 00:22:11.830 "traddr": "10.0.0.2", 00:22:11.830 "trsvcid": "4420" 00:22:11.830 }, 00:22:11.830 "peer_address": { 00:22:11.830 "trtype": "TCP", 00:22:11.830 "adrfam": "IPv4", 00:22:11.830 "traddr": "10.0.0.1", 00:22:11.830 "trsvcid": "53176" 00:22:11.830 }, 00:22:11.830 "auth": { 00:22:11.830 "state": "completed", 00:22:11.830 "digest": "sha384", 00:22:11.830 "dhgroup": "ffdhe3072" 00:22:11.830 } 00:22:11.830 } 00:22:11.830 ]' 00:22:11.830 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.830 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:11.830 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.830 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:11.830 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.830 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.830 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.830 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.094 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:12.094 00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:12.667 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.667 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:12.667 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.667 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.667 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.667 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.667 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:12.667 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:12.931 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:12.931 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.931 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:12.931 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:12.931 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:12.931 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.931 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:12.931 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:12.931 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.931 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.931 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:12.931 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.931 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.192 00:22:13.192 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.192 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.192 00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.451 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.451 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.451 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.451 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.451 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.451 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.451 { 00:22:13.451 "cntlid": 71, 00:22:13.451 "qid": 0, 00:22:13.451 "state": "enabled", 00:22:13.451 "thread": "nvmf_tgt_poll_group_000", 00:22:13.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:13.451 "listen_address": { 00:22:13.452 "trtype": "TCP", 00:22:13.452 "adrfam": "IPv4", 00:22:13.452 "traddr": "10.0.0.2", 00:22:13.452 "trsvcid": "4420" 00:22:13.452 }, 00:22:13.452 "peer_address": { 00:22:13.452 "trtype": "TCP", 00:22:13.452 "adrfam": "IPv4", 00:22:13.452 "traddr": "10.0.0.1", 00:22:13.452 "trsvcid": "53208" 00:22:13.452 }, 00:22:13.452 "auth": { 00:22:13.452 "state": "completed", 00:22:13.452 "digest": "sha384", 00:22:13.452 "dhgroup": "ffdhe3072" 00:22:13.452 } 00:22:13.452 } 00:22:13.452 ]' 00:22:13.452 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.452 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.452 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.452 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:13.452 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:22:13.452 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.452 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.452 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.712 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:13.712 00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:14.281 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.281 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.281 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.281 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.282 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.282 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.282 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.282 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:14.282 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:14.541 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:14.541 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.541 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:14.541 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:14.541 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:14.541 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.541 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:14.541 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.541 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.541 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.541 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.541 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.541 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:14.800 00:22:14.800 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.800 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.800 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.084 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.084 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.084 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.084 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.084 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.084 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.084 { 00:22:15.084 "cntlid": 73, 00:22:15.084 "qid": 0, 00:22:15.084 "state": "enabled", 00:22:15.084 "thread": "nvmf_tgt_poll_group_000", 00:22:15.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:15.084 "listen_address": { 00:22:15.084 "trtype": "TCP", 00:22:15.084 "adrfam": "IPv4", 00:22:15.084 "traddr": "10.0.0.2", 00:22:15.084 "trsvcid": "4420" 00:22:15.084 }, 00:22:15.084 "peer_address": { 00:22:15.084 "trtype": "TCP", 00:22:15.084 "adrfam": "IPv4", 00:22:15.084 "traddr": "10.0.0.1", 00:22:15.084 "trsvcid": "36284" 00:22:15.084 }, 00:22:15.084 "auth": { 00:22:15.084 "state": "completed", 00:22:15.084 "digest": "sha384", 00:22:15.084 "dhgroup": "ffdhe4096" 00:22:15.084 } 00:22:15.084 } 00:22:15.084 ]' 00:22:15.084 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.084 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:15.084 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.084 
00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:15.084 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.084 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.084 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.084 00:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.344 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:15.344 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:15.913 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.913 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:15.913 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.913 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.913 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.913 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.913 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:15.913 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:16.172 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:16.172 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.172 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:16.172 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:16.172 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:16.172 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.172 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.172 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.172 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.172 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.172 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.172 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.172 00:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.431 00:22:16.431 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.431 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.431 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.691 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.691 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.692 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.692 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.692 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.692 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.692 { 00:22:16.692 "cntlid": 75, 00:22:16.692 "qid": 0, 00:22:16.692 "state": "enabled", 00:22:16.692 "thread": "nvmf_tgt_poll_group_000", 00:22:16.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:16.692 "listen_address": { 00:22:16.692 "trtype": "TCP", 00:22:16.692 "adrfam": "IPv4", 00:22:16.692 "traddr": "10.0.0.2", 00:22:16.692 "trsvcid": "4420" 00:22:16.692 }, 00:22:16.692 "peer_address": { 00:22:16.692 "trtype": "TCP", 00:22:16.692 "adrfam": "IPv4", 00:22:16.692 "traddr": "10.0.0.1", 00:22:16.692 "trsvcid": "36318" 00:22:16.692 }, 00:22:16.692 "auth": { 00:22:16.692 "state": "completed", 00:22:16.692 "digest": "sha384", 00:22:16.692 "dhgroup": "ffdhe4096" 00:22:16.692 } 00:22:16.692 } 00:22:16.692 ]' 00:22:16.692 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.692 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:16.692 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.692 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:16.692 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.692 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.692 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.692 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.950 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:16.950 00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:17.519 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.519 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:17.519 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.519 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.519 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.519 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.519 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:17.519 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:17.779 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:17.779 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.779 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:17.779 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe4096 00:22:17.779 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:17.779 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.779 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.779 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.779 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.779 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.779 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.779 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.779 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.039 00:22:18.039 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.039 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.039 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.299 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.299 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.299 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.299 00:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.299 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.299 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.299 { 00:22:18.299 "cntlid": 77, 00:22:18.299 "qid": 0, 00:22:18.299 "state": "enabled", 00:22:18.299 "thread": "nvmf_tgt_poll_group_000", 00:22:18.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:18.299 "listen_address": { 00:22:18.299 "trtype": "TCP", 00:22:18.299 "adrfam": "IPv4", 00:22:18.299 "traddr": "10.0.0.2", 00:22:18.299 "trsvcid": "4420" 00:22:18.299 }, 00:22:18.299 "peer_address": { 00:22:18.299 "trtype": "TCP", 00:22:18.299 "adrfam": "IPv4", 00:22:18.299 "traddr": "10.0.0.1", 00:22:18.299 "trsvcid": "36332" 00:22:18.299 }, 00:22:18.299 "auth": { 00:22:18.299 "state": 
"completed", 00:22:18.299 "digest": "sha384", 00:22:18.299 "dhgroup": "ffdhe4096" 00:22:18.299 } 00:22:18.299 } 00:22:18.299 ]' 00:22:18.299 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.299 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:18.299 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.299 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:18.299 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.299 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.299 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.299 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.558 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:18.558 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:19.127 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.127 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:19.127 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.127 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.127 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.127 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.127 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:19.127 00:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:19.386 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:19.386 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:22:19.386 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:19.386 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:19.386 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:19.386 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.386 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:19.386 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.386 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.386 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.386 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:19.386 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.386 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.645 00:22:19.645 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.645 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.645 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.904 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.904 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.904 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.904 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.904 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.904 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.904 { 00:22:19.904 "cntlid": 79, 00:22:19.904 "qid": 0, 00:22:19.904 "state": "enabled", 00:22:19.904 "thread": "nvmf_tgt_poll_group_000", 00:22:19.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:19.904 "listen_address": { 00:22:19.904 "trtype": "TCP", 00:22:19.904 "adrfam": "IPv4", 00:22:19.904 "traddr": "10.0.0.2", 00:22:19.904 "trsvcid": "4420" 00:22:19.904 }, 00:22:19.904 "peer_address": { 00:22:19.904 "trtype": "TCP", 00:22:19.904 "adrfam": "IPv4", 00:22:19.904 "traddr": 
"10.0.0.1", 00:22:19.904 "trsvcid": "36360" 00:22:19.904 }, 00:22:19.904 "auth": { 00:22:19.904 "state": "completed", 00:22:19.904 "digest": "sha384", 00:22:19.904 "dhgroup": "ffdhe4096" 00:22:19.904 } 00:22:19.904 } 00:22:19.904 ]' 00:22:19.904 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.904 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:19.904 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.904 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:19.904 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.904 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.904 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.904 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.163 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:20.163 00:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:20.730 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.730 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.730 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.730 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.730 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.730 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.730 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.730 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:20.730 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:20.990 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:20.990 00:03:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.990 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:20.990 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:20.990 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:20.990 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.990 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.990 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.990 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.990 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.990 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.990 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.990 00:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.249 00:22:21.249 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.249 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.249 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.511 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.511 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.511 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.511 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.511 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.511 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.511 { 00:22:21.511 "cntlid": 81, 00:22:21.511 "qid": 0, 00:22:21.511 "state": "enabled", 00:22:21.511 "thread": "nvmf_tgt_poll_group_000", 00:22:21.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:21.511 "listen_address": { 00:22:21.511 "trtype": "TCP", 00:22:21.511 "adrfam": 
"IPv4", 00:22:21.511 "traddr": "10.0.0.2", 00:22:21.511 "trsvcid": "4420" 00:22:21.511 }, 00:22:21.511 "peer_address": { 00:22:21.511 "trtype": "TCP", 00:22:21.511 "adrfam": "IPv4", 00:22:21.511 "traddr": "10.0.0.1", 00:22:21.511 "trsvcid": "36378" 00:22:21.511 }, 00:22:21.511 "auth": { 00:22:21.511 "state": "completed", 00:22:21.511 "digest": "sha384", 00:22:21.511 "dhgroup": "ffdhe6144" 00:22:21.511 } 00:22:21.511 } 00:22:21.511 ]' 00:22:21.511 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.511 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:21.511 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.511 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:21.511 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.772 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.772 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.772 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.772 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:21.772 00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:22.342 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.342 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:22.342 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.342 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.342 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.342 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.342 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:22.342 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:22.602 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:22.602 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.602 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:22.602 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:22.602 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:22.602 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.602 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.602 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.602 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.602 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.602 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.602 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.602 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.861 00:22:23.121 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.121 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.121 00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.121 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.121 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.121 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.121 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.121 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.121 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.121 { 
00:22:23.121 "cntlid": 83, 00:22:23.121 "qid": 0, 00:22:23.121 "state": "enabled", 00:22:23.121 "thread": "nvmf_tgt_poll_group_000", 00:22:23.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:23.121 "listen_address": { 00:22:23.121 "trtype": "TCP", 00:22:23.121 "adrfam": "IPv4", 00:22:23.121 "traddr": "10.0.0.2", 00:22:23.121 "trsvcid": "4420" 00:22:23.121 }, 00:22:23.121 "peer_address": { 00:22:23.121 "trtype": "TCP", 00:22:23.121 "adrfam": "IPv4", 00:22:23.121 "traddr": "10.0.0.1", 00:22:23.121 "trsvcid": "36404" 00:22:23.121 }, 00:22:23.121 "auth": { 00:22:23.121 "state": "completed", 00:22:23.121 "digest": "sha384", 00:22:23.121 "dhgroup": "ffdhe6144" 00:22:23.121 } 00:22:23.121 } 00:22:23.121 ]' 00:22:23.122 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.384 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:23.384 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.384 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:23.384 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.384 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.384 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.384 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.643 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:23.643 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:24.211 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.211 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:24.211 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.211 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.211 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.211 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.211 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:24.211 00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:24.211 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:24.211 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.211 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:24.211 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:24.211 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:24.211 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.211 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.211 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.211 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.211 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.211 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.211 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.211 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.781 00:22:24.781 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.781 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.781 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.781 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.781 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.781 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.781 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.040 00:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.040 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.040 { 00:22:25.040 "cntlid": 85, 00:22:25.040 "qid": 0, 00:22:25.040 "state": "enabled", 00:22:25.040 "thread": "nvmf_tgt_poll_group_000", 00:22:25.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:25.040 "listen_address": { 00:22:25.040 "trtype": "TCP", 00:22:25.040 "adrfam": "IPv4", 00:22:25.040 "traddr": "10.0.0.2", 00:22:25.040 "trsvcid": "4420" 00:22:25.040 }, 00:22:25.040 "peer_address": { 00:22:25.040 "trtype": "TCP", 00:22:25.040 "adrfam": "IPv4", 00:22:25.040 "traddr": "10.0.0.1", 00:22:25.040 "trsvcid": "37468" 00:22:25.040 }, 00:22:25.040 "auth": { 00:22:25.040 "state": "completed", 00:22:25.040 "digest": "sha384", 00:22:25.040 "dhgroup": "ffdhe6144" 00:22:25.040 } 00:22:25.040 } 00:22:25.040 ]' 00:22:25.040 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.040 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:25.040 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.040 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:25.040 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.040 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.040 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.040 00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.299 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:25.299 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:25.869 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.869 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:25.869 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.869 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.869 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.869 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.869 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:25.869 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:26.129 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:26.129 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.129 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:26.129 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:26.129 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:26.129 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.129 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:26.129 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.129 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.129 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.129 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:26.129 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.129 00:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.389 00:22:26.389 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.389 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.389 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.648 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.648 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.648 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
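Interleaved with the bdev-level checks, every pass also exercises the kernel initiator: the repeated nvme connect / nvme disconnect lines in the trace hand the same DH-HMAC-CHAP material to nvme-cli. With the long generated secrets replaced by placeholders (the real values are the DHHC-1:... strings printed in the log), that host-side leg reduces to roughly:

  # Kernel-initiator leg of a pass: authenticate via nvme-cli against the same subsystem.
  # The two DHHC-1 strings are placeholders for the per-key secrets shown in the log.
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
      --dhchap-secret 'DHHC-1:02:<host secret>:' --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

After the disconnect the host entry is dropped again with nvmf_subsystem_remove_host, so the next key or dhgroup starts from a clean subsystem.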
00:22:26.648 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.648 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.648 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.648 { 00:22:26.648 "cntlid": 87, 00:22:26.648 "qid": 0, 00:22:26.648 "state": "enabled", 00:22:26.648 "thread": "nvmf_tgt_poll_group_000", 00:22:26.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:26.648 "listen_address": { 00:22:26.648 "trtype": "TCP", 00:22:26.648 "adrfam": "IPv4", 00:22:26.648 "traddr": "10.0.0.2", 00:22:26.648 "trsvcid": "4420" 00:22:26.648 }, 00:22:26.648 "peer_address": { 00:22:26.648 "trtype": "TCP", 00:22:26.648 "adrfam": "IPv4", 00:22:26.648 "traddr": "10.0.0.1", 00:22:26.648 "trsvcid": "37494" 00:22:26.648 }, 00:22:26.648 "auth": { 00:22:26.648 "state": "completed", 00:22:26.648 "digest": "sha384", 00:22:26.648 "dhgroup": "ffdhe6144" 00:22:26.648 } 00:22:26.648 } 00:22:26.648 ]' 00:22:26.648 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.648 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:26.648 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.648 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:26.648 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.648 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.648 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.648 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.907 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:26.907 00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:27.479 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.480 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:27.480 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.480 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.480 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.480 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:27.480 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.480 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:27.480 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:27.739 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:27.739 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.739 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:27.739 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:27.739 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:27.739 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.739 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.739 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.739 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.739 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.739 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.739 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:27.739 00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.308 00:22:28.308 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.308 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.308 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.567 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.567 00:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.567 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.567 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.567 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.567 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.567 { 00:22:28.567 "cntlid": 89, 00:22:28.567 "qid": 0, 00:22:28.567 "state": "enabled", 00:22:28.567 "thread": "nvmf_tgt_poll_group_000", 00:22:28.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:28.567 "listen_address": { 00:22:28.567 "trtype": "TCP", 00:22:28.567 "adrfam": "IPv4", 00:22:28.567 "traddr": "10.0.0.2", 00:22:28.567 "trsvcid": "4420" 00:22:28.567 }, 00:22:28.567 "peer_address": { 00:22:28.567 "trtype": "TCP", 00:22:28.567 "adrfam": "IPv4", 00:22:28.567 "traddr": "10.0.0.1", 00:22:28.567 "trsvcid": "37520" 00:22:28.567 }, 00:22:28.567 "auth": { 00:22:28.567 "state": "completed", 00:22:28.567 "digest": "sha384", 00:22:28.567 "dhgroup": "ffdhe8192" 00:22:28.567 } 00:22:28.567 } 00:22:28.567 ]' 00:22:28.567 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.567 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:28.567 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.567 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:28.567 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.567 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.567 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.567 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.826 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:28.826 00:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:29.396 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.396 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:29.396 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.396 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.396 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.396 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:29.396 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:29.396 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:29.655 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:29.655 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:29.655 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:29.655 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:29.655 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:29.655 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.655 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.655 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.655 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.655 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.655 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.655 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:29.655 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.231 00:22:30.231 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.232 00:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.232 00:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.232 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.232 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.232 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.232 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.232 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.232 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.232 { 00:22:30.232 "cntlid": 91, 00:22:30.232 "qid": 0, 00:22:30.232 "state": "enabled", 00:22:30.232 "thread": "nvmf_tgt_poll_group_000", 00:22:30.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:30.232 "listen_address": { 00:22:30.232 "trtype": "TCP", 00:22:30.232 "adrfam": "IPv4", 00:22:30.232 "traddr": "10.0.0.2", 00:22:30.232 "trsvcid": "4420" 00:22:30.232 }, 00:22:30.232 "peer_address": { 00:22:30.232 "trtype": "TCP", 00:22:30.232 "adrfam": "IPv4", 00:22:30.232 "traddr": "10.0.0.1", 00:22:30.232 "trsvcid": "37548" 00:22:30.232 }, 00:22:30.232 "auth": { 00:22:30.232 "state": "completed", 00:22:30.232 "digest": "sha384", 00:22:30.232 "dhgroup": "ffdhe8192" 00:22:30.232 } 00:22:30.232 } 00:22:30.232 ]' 00:22:30.232 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.490 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:30.490 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.490 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:30.490 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.490 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.490 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.490 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.749 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:30.749 00:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:31.318 00:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.318 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.577 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.577 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.577 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.836 00:22:31.836 00:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.836 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.836 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.095 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.095 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.095 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.095 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.095 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.095 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.095 { 00:22:32.095 "cntlid": 93, 00:22:32.095 "qid": 0, 00:22:32.095 "state": "enabled", 00:22:32.095 "thread": "nvmf_tgt_poll_group_000", 00:22:32.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:32.095 "listen_address": { 00:22:32.095 "trtype": "TCP", 00:22:32.095 "adrfam": "IPv4", 00:22:32.095 "traddr": "10.0.0.2", 00:22:32.095 "trsvcid": "4420" 00:22:32.095 }, 00:22:32.095 "peer_address": { 00:22:32.095 "trtype": "TCP", 00:22:32.095 "adrfam": "IPv4", 00:22:32.095 "traddr": "10.0.0.1", 00:22:32.095 "trsvcid": "37572" 00:22:32.095 }, 00:22:32.095 "auth": { 00:22:32.095 "state": "completed", 00:22:32.095 "digest": "sha384", 00:22:32.095 "dhgroup": "ffdhe8192" 00:22:32.095 } 00:22:32.095 } 00:22:32.095 ]' 00:22:32.095 00:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.095 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:32.095 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.354 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:32.354 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.354 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.354 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.354 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.618 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:32.618 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 
0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:33.201 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.201 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:33.201 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.201 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.201 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.201 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.201 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:33.201 00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:33.201 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:33.201 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.201 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:33.201 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:33.201 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:33.201 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.201 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:33.201 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.201 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.201 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.201 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:33.201 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.202 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.770 00:22:33.770 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.770 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.770 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.029 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.029 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.029 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.029 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.029 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.029 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.029 { 00:22:34.029 "cntlid": 95, 00:22:34.029 "qid": 0, 00:22:34.029 "state": "enabled", 00:22:34.029 "thread": "nvmf_tgt_poll_group_000", 00:22:34.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:34.029 "listen_address": { 00:22:34.029 "trtype": "TCP", 00:22:34.029 "adrfam": "IPv4", 00:22:34.029 "traddr": "10.0.0.2", 00:22:34.029 "trsvcid": "4420" 00:22:34.029 }, 00:22:34.029 "peer_address": { 00:22:34.029 "trtype": "TCP", 00:22:34.029 "adrfam": "IPv4", 00:22:34.029 "traddr": "10.0.0.1", 00:22:34.029 "trsvcid": "37590" 00:22:34.029 }, 00:22:34.029 "auth": { 00:22:34.029 "state": "completed", 00:22:34.029 "digest": "sha384", 00:22:34.029 "dhgroup": "ffdhe8192" 00:22:34.029 } 00:22:34.029 } 00:22:34.029 ]' 00:22:34.029 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.029 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:34.029 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:34.029 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:34.029 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:34.030 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.030 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.030 00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.289 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:34.289 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:34.858 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.858 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.858 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.858 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.858 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.858 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:34.858 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.858 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.858 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:34.858 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:35.120 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:35.120 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.120 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:35.120 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:35.120 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:35.120 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.120 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.120 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.120 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.120 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.120 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.120 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.120 00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.380 00:22:35.380 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.380 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.380 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.639 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.639 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.639 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.639 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.639 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.639 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.639 { 00:22:35.639 "cntlid": 97, 00:22:35.639 "qid": 0, 00:22:35.639 "state": "enabled", 00:22:35.639 "thread": "nvmf_tgt_poll_group_000", 00:22:35.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:35.639 "listen_address": { 00:22:35.639 "trtype": "TCP", 00:22:35.639 "adrfam": "IPv4", 00:22:35.639 "traddr": "10.0.0.2", 00:22:35.639 "trsvcid": "4420" 00:22:35.639 }, 00:22:35.639 "peer_address": { 00:22:35.639 "trtype": "TCP", 00:22:35.639 "adrfam": "IPv4", 00:22:35.639 "traddr": "10.0.0.1", 00:22:35.639 "trsvcid": "56198" 00:22:35.639 }, 00:22:35.639 "auth": { 00:22:35.639 "state": "completed", 00:22:35.639 "digest": "sha512", 00:22:35.639 "dhgroup": "null" 00:22:35.639 } 00:22:35.639 } 00:22:35.639 ]' 00:22:35.639 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.639 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.639 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.639 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:35.639 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.639 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.639 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.639 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.900 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:35.900 00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:36.472 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.472 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:36.472 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.472 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.472 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.472 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.472 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:36.473 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:36.732 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:22:36.732 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.732 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.732 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:36.732 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:36.732 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.732 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.732 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.732 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.732 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.732 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.732 00:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.732 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.991 00:22:36.992 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.992 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.992 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.251 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.251 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.251 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.251 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.251 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.251 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.251 { 00:22:37.251 "cntlid": 99, 00:22:37.251 "qid": 0, 00:22:37.251 "state": "enabled", 00:22:37.251 "thread": "nvmf_tgt_poll_group_000", 00:22:37.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:37.251 "listen_address": { 00:22:37.251 "trtype": "TCP", 00:22:37.251 "adrfam": "IPv4", 00:22:37.251 "traddr": "10.0.0.2", 00:22:37.251 "trsvcid": "4420" 00:22:37.251 }, 00:22:37.251 "peer_address": { 00:22:37.251 "trtype": "TCP", 00:22:37.251 "adrfam": "IPv4", 00:22:37.251 "traddr": "10.0.0.1", 00:22:37.251 "trsvcid": "56222" 00:22:37.251 }, 00:22:37.251 "auth": { 00:22:37.251 "state": "completed", 00:22:37.251 "digest": "sha512", 00:22:37.251 "dhgroup": "null" 00:22:37.251 } 00:22:37.251 } 00:22:37.251 ]' 00:22:37.251 00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.251 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.251 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.251 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:37.251 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.251 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.251 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.251 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.513 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:37.513 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:38.082 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.082 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:38.082 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.082 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.082 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.082 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.082 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:38.082 00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:38.344 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:22:38.344 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.344 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:38.344 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:38.344 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:38.344 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.344 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.344 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.344 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.344 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.344 
00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.344 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.344 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.603 00:22:38.603 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.603 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.603 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.863 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.863 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.863 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.863 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.863 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.863 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.863 { 00:22:38.863 "cntlid": 101, 00:22:38.863 "qid": 0, 00:22:38.863 "state": "enabled", 00:22:38.863 "thread": "nvmf_tgt_poll_group_000", 00:22:38.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:38.863 "listen_address": { 00:22:38.863 "trtype": "TCP", 00:22:38.863 "adrfam": "IPv4", 00:22:38.863 "traddr": "10.0.0.2", 00:22:38.863 "trsvcid": "4420" 00:22:38.863 }, 00:22:38.863 "peer_address": { 00:22:38.863 "trtype": "TCP", 00:22:38.863 "adrfam": "IPv4", 00:22:38.863 "traddr": "10.0.0.1", 00:22:38.863 "trsvcid": "56250" 00:22:38.863 }, 00:22:38.863 "auth": { 00:22:38.863 "state": "completed", 00:22:38.863 "digest": "sha512", 00:22:38.863 "dhgroup": "null" 00:22:38.863 } 00:22:38.863 } 00:22:38.863 ]' 00:22:38.863 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.863 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.863 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.863 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:38.863 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.863 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.863 00:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.863 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.127 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:39.127 00:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:39.696 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.696 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:39.696 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.696 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.696 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.696 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.696 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:39.696 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:39.957 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:39.957 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.957 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:39.957 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:39.957 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:39.957 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.957 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:39.957 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.957 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:39.957 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.957 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:39.957 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.957 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:40.217 00:22:40.217 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.217 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.217 00:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.217 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.217 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.217 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.217 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.217 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.217 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.217 { 00:22:40.217 "cntlid": 103, 00:22:40.217 "qid": 0, 00:22:40.217 "state": "enabled", 00:22:40.217 "thread": "nvmf_tgt_poll_group_000", 00:22:40.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:40.217 "listen_address": { 00:22:40.217 "trtype": "TCP", 00:22:40.217 "adrfam": "IPv4", 00:22:40.217 "traddr": "10.0.0.2", 00:22:40.217 "trsvcid": "4420" 00:22:40.217 }, 00:22:40.217 "peer_address": { 00:22:40.217 "trtype": "TCP", 00:22:40.217 "adrfam": "IPv4", 00:22:40.217 "traddr": "10.0.0.1", 00:22:40.217 "trsvcid": "56272" 00:22:40.217 }, 00:22:40.217 "auth": { 00:22:40.217 "state": "completed", 00:22:40.217 "digest": "sha512", 00:22:40.217 "dhgroup": "null" 00:22:40.217 } 00:22:40.217 } 00:22:40.217 ]' 00:22:40.217 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.476 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.476 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.476 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:40.476 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.476 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.476 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.476 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.734 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:40.734 00:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:41.303 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.303 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:41.303 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.303 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.303 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.303 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.303 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.303 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:41.303 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:41.563 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:41.563 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.563 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:41.563 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:41.563 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:41.563 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.563 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.563 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.563 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.563 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.563 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.563 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.563 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.821 00:22:41.821 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.821 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.821 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.821 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.821 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.821 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.821 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.821 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.821 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.821 { 00:22:41.821 "cntlid": 105, 00:22:41.821 "qid": 0, 00:22:41.821 "state": "enabled", 00:22:41.821 "thread": "nvmf_tgt_poll_group_000", 00:22:41.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:41.821 "listen_address": { 00:22:41.821 "trtype": "TCP", 00:22:41.821 "adrfam": "IPv4", 00:22:41.821 "traddr": "10.0.0.2", 00:22:41.821 "trsvcid": "4420" 00:22:41.821 }, 00:22:41.821 "peer_address": { 00:22:41.821 "trtype": "TCP", 00:22:41.821 "adrfam": "IPv4", 00:22:41.821 "traddr": "10.0.0.1", 00:22:41.821 "trsvcid": "56298" 00:22:41.821 }, 00:22:41.821 "auth": { 00:22:41.821 "state": "completed", 00:22:41.821 "digest": "sha512", 00:22:41.821 "dhgroup": "ffdhe2048" 00:22:41.821 } 00:22:41.821 } 00:22:41.821 ]' 00:22:41.821 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.078 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.078 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.078 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 
== \f\f\d\h\e\2\0\4\8 ]] 00:22:42.078 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.078 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.078 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.078 00:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.338 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:42.338 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.906 00:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.906 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.164 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.164 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.164 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.164 00:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.164 00:22:43.164 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.164 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.422 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.423 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.423 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.423 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.423 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.423 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.423 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.423 { 00:22:43.423 "cntlid": 107, 00:22:43.423 "qid": 0, 00:22:43.423 "state": "enabled", 00:22:43.423 "thread": "nvmf_tgt_poll_group_000", 00:22:43.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:43.423 "listen_address": { 00:22:43.423 "trtype": "TCP", 00:22:43.423 "adrfam": "IPv4", 00:22:43.423 "traddr": "10.0.0.2", 00:22:43.423 "trsvcid": "4420" 00:22:43.423 }, 00:22:43.423 "peer_address": { 00:22:43.423 "trtype": "TCP", 00:22:43.423 "adrfam": "IPv4", 00:22:43.423 "traddr": "10.0.0.1", 00:22:43.423 "trsvcid": "56326" 00:22:43.423 }, 00:22:43.423 "auth": { 00:22:43.423 "state": "completed", 00:22:43.423 "digest": "sha512", 00:22:43.423 "dhgroup": "ffdhe2048" 00:22:43.423 } 00:22:43.423 } 00:22:43.423 ]' 00:22:43.423 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.423 00:04:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.423 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.682 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:43.682 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.682 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.682 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.682 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.941 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:43.941 00:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:44.509 00:04:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.509 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.510 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.510 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.510 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.510 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.510 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.769 00:22:44.769 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.769 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.769 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.027 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.027 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.027 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.027 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.027 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.027 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.027 { 00:22:45.027 "cntlid": 109, 00:22:45.027 "qid": 0, 00:22:45.027 "state": "enabled", 00:22:45.027 "thread": "nvmf_tgt_poll_group_000", 00:22:45.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:45.027 "listen_address": { 00:22:45.027 "trtype": "TCP", 00:22:45.027 "adrfam": "IPv4", 00:22:45.027 "traddr": "10.0.0.2", 00:22:45.027 "trsvcid": "4420" 00:22:45.027 }, 00:22:45.027 "peer_address": { 00:22:45.027 "trtype": "TCP", 00:22:45.027 "adrfam": "IPv4", 00:22:45.027 "traddr": "10.0.0.1", 00:22:45.027 "trsvcid": "39734" 00:22:45.027 }, 00:22:45.027 "auth": { 00:22:45.027 "state": "completed", 00:22:45.027 "digest": 
"sha512", 00:22:45.027 "dhgroup": "ffdhe2048" 00:22:45.027 } 00:22:45.027 } 00:22:45.027 ]' 00:22:45.027 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.027 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.027 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:45.292 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:45.292 00:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.292 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.292 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.292 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.292 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:45.292 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:45.863 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.124 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:46.124 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.124 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.124 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.124 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.124 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:46.124 00:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:46.124 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:46.124 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.124 00:04:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:46.124 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:46.124 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:46.124 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.124 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:46.124 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.124 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.124 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.124 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:46.124 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.124 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.382 00:22:46.382 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.383 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.383 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.642 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.642 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.642 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.642 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.642 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.642 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.642 { 00:22:46.642 "cntlid": 111, 00:22:46.642 "qid": 0, 00:22:46.642 "state": "enabled", 00:22:46.642 "thread": "nvmf_tgt_poll_group_000", 00:22:46.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:46.642 "listen_address": { 00:22:46.642 "trtype": "TCP", 00:22:46.642 "adrfam": "IPv4", 00:22:46.642 "traddr": "10.0.0.2", 00:22:46.642 "trsvcid": "4420" 00:22:46.642 }, 00:22:46.642 "peer_address": { 00:22:46.642 "trtype": "TCP", 00:22:46.642 "adrfam": "IPv4", 00:22:46.642 "traddr": "10.0.0.1", 00:22:46.642 
"trsvcid": "39768" 00:22:46.642 }, 00:22:46.642 "auth": { 00:22:46.642 "state": "completed", 00:22:46.642 "digest": "sha512", 00:22:46.642 "dhgroup": "ffdhe2048" 00:22:46.642 } 00:22:46.642 } 00:22:46.642 ]' 00:22:46.642 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.642 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:46.642 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.901 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:46.901 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.901 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.901 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.901 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.160 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:47.160 00:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:47.726 00:04:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.726 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:47.984 00:22:48.243 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.243 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.243 00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.243 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.243 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.243 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.243 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.243 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.243 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.243 { 00:22:48.243 "cntlid": 113, 00:22:48.243 "qid": 0, 00:22:48.243 "state": "enabled", 00:22:48.243 "thread": "nvmf_tgt_poll_group_000", 00:22:48.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:48.243 "listen_address": { 00:22:48.243 "trtype": "TCP", 00:22:48.243 "adrfam": 
"IPv4", 00:22:48.243 "traddr": "10.0.0.2", 00:22:48.243 "trsvcid": "4420" 00:22:48.243 }, 00:22:48.243 "peer_address": { 00:22:48.243 "trtype": "TCP", 00:22:48.243 "adrfam": "IPv4", 00:22:48.243 "traddr": "10.0.0.1", 00:22:48.243 "trsvcid": "39798" 00:22:48.243 }, 00:22:48.243 "auth": { 00:22:48.243 "state": "completed", 00:22:48.243 "digest": "sha512", 00:22:48.243 "dhgroup": "ffdhe3072" 00:22:48.243 } 00:22:48.243 } 00:22:48.243 ]' 00:22:48.243 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.243 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:48.243 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.502 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:48.502 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.503 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.503 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.503 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.761 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:48.761 00:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:49.329 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.329 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:49.329 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.329 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.329 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.329 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.329 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:49.329 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:49.329 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:49.329 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.330 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:49.330 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:49.330 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:49.330 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.330 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.330 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.330 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.330 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.330 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.330 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.330 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.589 00:22:49.847 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.848 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.848 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.848 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.848 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.848 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.848 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.848 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.848 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.848 { 
00:22:49.848 "cntlid": 115, 00:22:49.848 "qid": 0, 00:22:49.848 "state": "enabled", 00:22:49.848 "thread": "nvmf_tgt_poll_group_000", 00:22:49.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:49.848 "listen_address": { 00:22:49.848 "trtype": "TCP", 00:22:49.848 "adrfam": "IPv4", 00:22:49.848 "traddr": "10.0.0.2", 00:22:49.848 "trsvcid": "4420" 00:22:49.848 }, 00:22:49.848 "peer_address": { 00:22:49.848 "trtype": "TCP", 00:22:49.848 "adrfam": "IPv4", 00:22:49.848 "traddr": "10.0.0.1", 00:22:49.848 "trsvcid": "39816" 00:22:49.848 }, 00:22:49.848 "auth": { 00:22:49.848 "state": "completed", 00:22:49.848 "digest": "sha512", 00:22:49.848 "dhgroup": "ffdhe3072" 00:22:49.848 } 00:22:49.848 } 00:22:49.848 ]' 00:22:49.848 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:50.107 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:50.107 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.107 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:50.107 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.107 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.107 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.107 00:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.366 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:50.366 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:50.935 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.935 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:50.935 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.935 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.935 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.935 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:50.935 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:50.935 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:50.935 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:22:50.935 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:50.935 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:50.936 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:50.936 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:50.936 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.936 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:50.936 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.936 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.194 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.194 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.194 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.194 00:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.194 00:22:51.453 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.453 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.453 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.453 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.453 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.453 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.453 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.453 00:04:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.453 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.453 { 00:22:51.453 "cntlid": 117, 00:22:51.453 "qid": 0, 00:22:51.453 "state": "enabled", 00:22:51.453 "thread": "nvmf_tgt_poll_group_000", 00:22:51.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:51.453 "listen_address": { 00:22:51.453 "trtype": "TCP", 00:22:51.453 "adrfam": "IPv4", 00:22:51.453 "traddr": "10.0.0.2", 00:22:51.453 "trsvcid": "4420" 00:22:51.453 }, 00:22:51.453 "peer_address": { 00:22:51.453 "trtype": "TCP", 00:22:51.453 "adrfam": "IPv4", 00:22:51.453 "traddr": "10.0.0.1", 00:22:51.453 "trsvcid": "39844" 00:22:51.453 }, 00:22:51.453 "auth": { 00:22:51.453 "state": "completed", 00:22:51.453 "digest": "sha512", 00:22:51.453 "dhgroup": "ffdhe3072" 00:22:51.453 } 00:22:51.453 } 00:22:51.453 ]' 00:22:51.453 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.712 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:51.712 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.712 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:51.712 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.712 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.712 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.712 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.970 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:51.970 00:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:52.538 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:52.797 00:22:53.056 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.056 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.056 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.056 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.056 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.056 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
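[Editor's note] Each pass of the loop traced above exercises one digest/dhgroup/key combination of NVMe-oF in-band DH-HMAC-CHAP authentication. The sketch below condenses a single connect_authenticate iteration into the sequence of RPC and nvme-cli calls that appear in this trace; it is illustrative only. The key names (key0/ckey0) refer to keyring entries registered earlier in auth.sh (outside this excerpt), the DHHC-1 secrets are placeholders, and the target-side RPC socket path is an assumption, since the log only shows the host-side socket /var/tmp/host.sock.

#!/usr/bin/env bash
# Condensed sketch of one connect_authenticate iteration, assuming an SPDK
# nvmf target listening on 10.0.0.2:4420 and a host-side SPDK app whose RPC
# socket is /var/tmp/host.sock (as in the log above).
RPC=./scripts/rpc.py                       # rpc.py from the SPDK tree
HOST_SOCK=/var/tmp/host.sock               # host-side (bdev_nvme) RPC socket, per the log
TGT_SOCK=/var/tmp/spdk.sock                # target-side RPC socket: placeholder, not shown in the log
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

# 1. Restrict the host to the digest/dhgroup pair under test.
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# 2. Allow the host on the subsystem with the key pair under test (key0..key3).
$RPC -s $TGT_SOCK nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller through the SPDK host stack and confirm it came up.
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $($RPC -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# 4. Check what the target negotiated on the admin qpair.
$RPC -s $TGT_SOCK nvmf_subsystem_get_qpairs $SUBNQN | \
    jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# 5. Tear down the SPDK-host connection, then repeat the handshake with nvme-cli
#    (secrets are placeholders here; the log passes the full DHHC-1 strings).
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID -l 0 \
    --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
nvme disconnect -n $SUBNQN

# 6. Remove the host entry before the next key/dhgroup combination.
$RPC -s $TGT_SOCK nvmf_subsystem_remove_host $SUBNQN $HOSTNQN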
00:22:53.056 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.056 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.056 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.056 { 00:22:53.056 "cntlid": 119, 00:22:53.056 "qid": 0, 00:22:53.056 "state": "enabled", 00:22:53.056 "thread": "nvmf_tgt_poll_group_000", 00:22:53.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:53.056 "listen_address": { 00:22:53.056 "trtype": "TCP", 00:22:53.056 "adrfam": "IPv4", 00:22:53.056 "traddr": "10.0.0.2", 00:22:53.056 "trsvcid": "4420" 00:22:53.056 }, 00:22:53.056 "peer_address": { 00:22:53.056 "trtype": "TCP", 00:22:53.056 "adrfam": "IPv4", 00:22:53.056 "traddr": "10.0.0.1", 00:22:53.056 "trsvcid": "39880" 00:22:53.056 }, 00:22:53.056 "auth": { 00:22:53.056 "state": "completed", 00:22:53.056 "digest": "sha512", 00:22:53.056 "dhgroup": "ffdhe3072" 00:22:53.056 } 00:22:53.056 } 00:22:53.056 ]' 00:22:53.056 00:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.314 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:53.314 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.314 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:53.314 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.314 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.314 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.314 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.572 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:53.572 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:54.139 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.139 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:54.139 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.139 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.139 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.139 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:54.139 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.139 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:54.139 00:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:54.398 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:54.398 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.398 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:54.398 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:54.398 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:54.398 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.398 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.398 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.398 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.398 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.398 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.398 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.398 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.657 00:22:54.657 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:54.657 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:54.657 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.916 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.916 00:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.916 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.916 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.916 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.916 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:54.916 { 00:22:54.916 "cntlid": 121, 00:22:54.916 "qid": 0, 00:22:54.916 "state": "enabled", 00:22:54.916 "thread": "nvmf_tgt_poll_group_000", 00:22:54.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:54.916 "listen_address": { 00:22:54.916 "trtype": "TCP", 00:22:54.916 "adrfam": "IPv4", 00:22:54.916 "traddr": "10.0.0.2", 00:22:54.916 "trsvcid": "4420" 00:22:54.916 }, 00:22:54.916 "peer_address": { 00:22:54.916 "trtype": "TCP", 00:22:54.916 "adrfam": "IPv4", 00:22:54.916 "traddr": "10.0.0.1", 00:22:54.916 "trsvcid": "58758" 00:22:54.916 }, 00:22:54.916 "auth": { 00:22:54.916 "state": "completed", 00:22:54.916 "digest": "sha512", 00:22:54.916 "dhgroup": "ffdhe4096" 00:22:54.916 } 00:22:54.916 } 00:22:54.916 ]' 00:22:54.916 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:54.916 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:54.916 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:54.916 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:54.916 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:54.916 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.917 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.917 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.175 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:55.175 00:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:22:55.743 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.743 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:55.743 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.743 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.743 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.743 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.743 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:55.743 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:56.001 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:56.001 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:56.001 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:56.001 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:56.001 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:56.001 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.002 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.002 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.002 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.002 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.002 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.002 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.002 00:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.261 00:22:56.261 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.261 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.261 00:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.518 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.518 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.518 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.518 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.518 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.518 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.518 { 00:22:56.518 "cntlid": 123, 00:22:56.518 "qid": 0, 00:22:56.518 "state": "enabled", 00:22:56.518 "thread": "nvmf_tgt_poll_group_000", 00:22:56.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:56.518 "listen_address": { 00:22:56.518 "trtype": "TCP", 00:22:56.518 "adrfam": "IPv4", 00:22:56.518 "traddr": "10.0.0.2", 00:22:56.518 "trsvcid": "4420" 00:22:56.518 }, 00:22:56.518 "peer_address": { 00:22:56.518 "trtype": "TCP", 00:22:56.518 "adrfam": "IPv4", 00:22:56.518 "traddr": "10.0.0.1", 00:22:56.518 "trsvcid": "58780" 00:22:56.518 }, 00:22:56.518 "auth": { 00:22:56.518 "state": "completed", 00:22:56.518 "digest": "sha512", 00:22:56.518 "dhgroup": "ffdhe4096" 00:22:56.518 } 00:22:56.518 } 00:22:56.518 ]' 00:22:56.518 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.518 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.518 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.518 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:56.518 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.518 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.518 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.518 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.777 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:56.777 00:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:22:57.344 00:04:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.344 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:57.344 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.344 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.344 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.344 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:57.344 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:57.344 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:57.603 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:57.603 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:57.603 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:57.603 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:57.603 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:57.603 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:57.603 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.603 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.603 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.603 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.603 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.603 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.603 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.863 00:22:57.863 00:04:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:57.863 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.863 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.122 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.122 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.122 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.122 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.122 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.122 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:58.122 { 00:22:58.122 "cntlid": 125, 00:22:58.122 "qid": 0, 00:22:58.122 "state": "enabled", 00:22:58.122 "thread": "nvmf_tgt_poll_group_000", 00:22:58.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:58.122 "listen_address": { 00:22:58.122 "trtype": "TCP", 00:22:58.122 "adrfam": "IPv4", 00:22:58.122 "traddr": "10.0.0.2", 00:22:58.122 "trsvcid": "4420" 00:22:58.122 }, 00:22:58.122 "peer_address": { 00:22:58.122 "trtype": "TCP", 00:22:58.122 "adrfam": "IPv4", 00:22:58.122 "traddr": "10.0.0.1", 00:22:58.122 "trsvcid": "58810" 00:22:58.122 }, 00:22:58.122 "auth": { 00:22:58.122 "state": "completed", 00:22:58.122 "digest": "sha512", 00:22:58.122 "dhgroup": "ffdhe4096" 00:22:58.122 } 00:22:58.122 } 00:22:58.122 ]' 00:22:58.122 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:58.122 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:58.122 00:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:58.122 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:58.122 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:58.381 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.381 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.381 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.381 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:58.381 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 
0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:22:58.949 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.949 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:58.949 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.949 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.949 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.949 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.949 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:58.949 00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:59.208 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:59.208 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:59.208 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:59.208 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:59.208 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:59.208 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.208 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:22:59.208 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.208 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.208 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.208 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:59.208 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:59.208 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:59.467 00:22:59.467 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:59.467 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:59.467 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.724 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.724 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.724 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.724 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.724 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.724 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.724 { 00:22:59.724 "cntlid": 127, 00:22:59.724 "qid": 0, 00:22:59.724 "state": "enabled", 00:22:59.724 "thread": "nvmf_tgt_poll_group_000", 00:22:59.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:22:59.724 "listen_address": { 00:22:59.724 "trtype": "TCP", 00:22:59.724 "adrfam": "IPv4", 00:22:59.724 "traddr": "10.0.0.2", 00:22:59.724 "trsvcid": "4420" 00:22:59.724 }, 00:22:59.724 "peer_address": { 00:22:59.724 "trtype": "TCP", 00:22:59.724 "adrfam": "IPv4", 00:22:59.725 "traddr": "10.0.0.1", 00:22:59.725 "trsvcid": "58850" 00:22:59.725 }, 00:22:59.725 "auth": { 00:22:59.725 "state": "completed", 00:22:59.725 "digest": "sha512", 00:22:59.725 "dhgroup": "ffdhe4096" 00:22:59.725 } 00:22:59.725 } 00:22:59.725 ]' 00:22:59.725 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.725 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:59.725 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.725 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:59.725 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.983 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.983 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.983 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.983 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:22:59.983 00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:23:00.551 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.551 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:00.551 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.551 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.551 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.551 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.551 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:00.551 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:00.551 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:00.825 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:23:00.825 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.825 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:00.825 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:00.825 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:00.825 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.825 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.825 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.825 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.825 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.825 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.825 00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.825 00:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.083 00:23:01.342 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:01.342 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:01.342 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.342 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.342 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.342 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.342 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.342 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.342 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:01.342 { 00:23:01.342 "cntlid": 129, 00:23:01.342 "qid": 0, 00:23:01.342 "state": "enabled", 00:23:01.342 "thread": "nvmf_tgt_poll_group_000", 00:23:01.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:01.342 "listen_address": { 00:23:01.342 "trtype": "TCP", 00:23:01.342 "adrfam": "IPv4", 00:23:01.342 "traddr": "10.0.0.2", 00:23:01.342 "trsvcid": "4420" 00:23:01.342 }, 00:23:01.342 "peer_address": { 00:23:01.342 "trtype": "TCP", 00:23:01.342 "adrfam": "IPv4", 00:23:01.342 "traddr": "10.0.0.1", 00:23:01.342 "trsvcid": "58884" 00:23:01.342 }, 00:23:01.342 "auth": { 00:23:01.342 "state": "completed", 00:23:01.342 "digest": "sha512", 00:23:01.342 "dhgroup": "ffdhe6144" 00:23:01.342 } 00:23:01.342 } 00:23:01.342 ]' 00:23:01.342 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:01.600 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:01.600 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:01.600 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:01.600 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.600 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.600 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.600 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.859 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:23:01.859 00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:23:02.426 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.426 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:02.426 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.426 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.426 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.426 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:02.426 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:02.426 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:02.684 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:23:02.685 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:02.685 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:02.685 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:02.685 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:02.685 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:02.685 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.685 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.685 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.685 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.685 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.685 00:04:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.685 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.943 00:23:02.944 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.944 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:02.944 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.202 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.202 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.202 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.202 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.202 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.202 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:03.202 { 00:23:03.202 "cntlid": 131, 00:23:03.202 "qid": 0, 00:23:03.202 "state": "enabled", 00:23:03.202 "thread": "nvmf_tgt_poll_group_000", 00:23:03.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:03.202 "listen_address": { 00:23:03.202 "trtype": "TCP", 00:23:03.202 "adrfam": "IPv4", 00:23:03.202 "traddr": "10.0.0.2", 00:23:03.202 "trsvcid": "4420" 00:23:03.202 }, 00:23:03.202 "peer_address": { 00:23:03.202 "trtype": "TCP", 00:23:03.202 "adrfam": "IPv4", 00:23:03.202 "traddr": "10.0.0.1", 00:23:03.202 "trsvcid": "58894" 00:23:03.202 }, 00:23:03.202 "auth": { 00:23:03.202 "state": "completed", 00:23:03.202 "digest": "sha512", 00:23:03.202 "dhgroup": "ffdhe6144" 00:23:03.202 } 00:23:03.202 } 00:23:03.202 ]' 00:23:03.202 00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:03.202 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:03.203 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:03.203 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:03.203 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:03.203 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.203 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.203 00:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.461 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:23:03.461 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:23:04.029 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.029 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:04.029 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.029 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.029 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.029 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:04.029 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:04.029 00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:04.288 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:23:04.288 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:04.288 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:04.288 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:04.288 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:04.288 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.289 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.289 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.289 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.289 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.289 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.289 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.289 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.548 00:23:04.807 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:04.807 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.807 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.807 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.807 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.807 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.807 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.807 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.807 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:04.807 { 00:23:04.807 "cntlid": 133, 00:23:04.807 "qid": 0, 00:23:04.807 "state": "enabled", 00:23:04.807 "thread": "nvmf_tgt_poll_group_000", 00:23:04.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:04.807 "listen_address": { 00:23:04.807 "trtype": "TCP", 00:23:04.807 "adrfam": "IPv4", 00:23:04.807 "traddr": "10.0.0.2", 00:23:04.807 "trsvcid": "4420" 00:23:04.807 }, 00:23:04.807 "peer_address": { 00:23:04.807 "trtype": "TCP", 00:23:04.807 "adrfam": "IPv4", 00:23:04.807 "traddr": "10.0.0.1", 00:23:04.807 "trsvcid": "47442" 00:23:04.807 }, 00:23:04.807 "auth": { 00:23:04.807 "state": "completed", 00:23:04.807 "digest": "sha512", 00:23:04.807 "dhgroup": "ffdhe6144" 00:23:04.807 } 00:23:04.807 } 00:23:04.807 ]' 00:23:04.807 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:05.069 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:05.069 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:05.069 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:05.069 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:05.069 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.069 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.069 00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.330 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:23:05.330 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:23:05.900 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.900 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:05.900 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.900 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.900 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.900 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:05.900 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:05.900 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:05.900 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:23:05.900 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:05.900 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:05.901 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:05.901 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:05.901 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.901 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:23:05.901 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:05.901 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.901 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.901 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:05.901 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:05.901 00:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:06.468 00:23:06.468 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:06.468 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:06.468 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.468 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.468 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.468 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.468 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.468 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.468 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:06.468 { 00:23:06.468 "cntlid": 135, 00:23:06.468 "qid": 0, 00:23:06.468 "state": "enabled", 00:23:06.468 "thread": "nvmf_tgt_poll_group_000", 00:23:06.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:06.468 "listen_address": { 00:23:06.468 "trtype": "TCP", 00:23:06.468 "adrfam": "IPv4", 00:23:06.468 "traddr": "10.0.0.2", 00:23:06.468 "trsvcid": "4420" 00:23:06.468 }, 00:23:06.468 "peer_address": { 00:23:06.468 "trtype": "TCP", 00:23:06.468 "adrfam": "IPv4", 00:23:06.468 "traddr": "10.0.0.1", 00:23:06.468 "trsvcid": "47460" 00:23:06.468 }, 00:23:06.468 "auth": { 00:23:06.468 "state": "completed", 00:23:06.468 "digest": "sha512", 00:23:06.468 "dhgroup": "ffdhe6144" 00:23:06.468 } 00:23:06.468 } 00:23:06.468 ]' 00:23:06.468 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:06.726 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:06.727 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:06.727 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:06.727 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:23:06.727 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.727 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.727 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.985 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:23:06.985 00:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:23:07.552 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.552 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:07.552 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.552 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.552 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.552 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.552 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:07.552 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:07.552 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:07.811 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:23:07.811 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:07.811 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:07.811 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:07.811 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:07.811 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.811 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:23:07.811 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.811 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.811 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.811 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.811 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.811 00:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:08.073 00:23:08.073 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:08.073 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:08.073 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.334 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.334 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.334 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.334 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.334 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.334 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:08.334 { 00:23:08.334 "cntlid": 137, 00:23:08.334 "qid": 0, 00:23:08.334 "state": "enabled", 00:23:08.334 "thread": "nvmf_tgt_poll_group_000", 00:23:08.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:08.334 "listen_address": { 00:23:08.334 "trtype": "TCP", 00:23:08.334 "adrfam": "IPv4", 00:23:08.334 "traddr": "10.0.0.2", 00:23:08.334 "trsvcid": "4420" 00:23:08.334 }, 00:23:08.334 "peer_address": { 00:23:08.334 "trtype": "TCP", 00:23:08.334 "adrfam": "IPv4", 00:23:08.334 "traddr": "10.0.0.1", 00:23:08.334 "trsvcid": "47494" 00:23:08.334 }, 00:23:08.334 "auth": { 00:23:08.334 "state": "completed", 00:23:08.334 "digest": "sha512", 00:23:08.334 "dhgroup": "ffdhe8192" 00:23:08.334 } 00:23:08.334 } 00:23:08.334 ]' 00:23:08.334 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:08.593 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:08.593 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:08.593 
00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:08.593 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:08.593 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.593 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.593 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.852 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:23:08.852 00:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.419 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:09.987 00:23:09.987 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:09.987 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:09.987 00:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.247 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.247 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.247 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.247 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.247 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.247 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:10.247 { 00:23:10.247 "cntlid": 139, 00:23:10.247 "qid": 0, 00:23:10.247 "state": "enabled", 00:23:10.247 "thread": "nvmf_tgt_poll_group_000", 00:23:10.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:10.247 "listen_address": { 00:23:10.247 "trtype": "TCP", 00:23:10.247 "adrfam": "IPv4", 00:23:10.247 "traddr": "10.0.0.2", 00:23:10.247 "trsvcid": "4420" 00:23:10.247 }, 00:23:10.247 "peer_address": { 00:23:10.247 "trtype": "TCP", 00:23:10.247 "adrfam": "IPv4", 00:23:10.247 "traddr": "10.0.0.1", 00:23:10.247 "trsvcid": "47518" 00:23:10.247 }, 00:23:10.247 "auth": { 00:23:10.247 "state": "completed", 00:23:10.247 "digest": "sha512", 00:23:10.247 "dhgroup": "ffdhe8192" 00:23:10.247 } 00:23:10.247 } 00:23:10.247 ]' 00:23:10.247 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:10.247 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:10.247 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:10.247 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:10.247 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:10.247 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.247 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.247 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.516 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:23:10.516 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: --dhchap-ctrl-secret DHHC-1:02:OTNmMzIyMTEwZmY0MzVkNDI5NzFjZTczMzE2ODhhYjEyYWViNWRjMzIyMmZjNTM1SjPwUw==: 00:23:11.083 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.083 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.083 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.083 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.083 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.083 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:11.083 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:11.083 00:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:11.349 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:11.349 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:11.349 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:11.349 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:23:11.349 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:11.349 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.349 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.349 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.349 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.349 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.349 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.349 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.349 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.916 00:23:11.916 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:11.916 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:11.916 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.175 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.175 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.175 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.175 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.175 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.175 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:12.175 { 00:23:12.175 "cntlid": 141, 00:23:12.175 "qid": 0, 00:23:12.175 "state": "enabled", 00:23:12.175 "thread": "nvmf_tgt_poll_group_000", 00:23:12.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:12.175 "listen_address": { 00:23:12.175 "trtype": "TCP", 00:23:12.175 "adrfam": "IPv4", 00:23:12.175 "traddr": "10.0.0.2", 00:23:12.175 "trsvcid": "4420" 00:23:12.175 }, 00:23:12.175 "peer_address": { 00:23:12.175 "trtype": "TCP", 00:23:12.175 "adrfam": "IPv4", 00:23:12.175 "traddr": "10.0.0.1", 00:23:12.175 "trsvcid": "47560" 00:23:12.175 }, 00:23:12.175 "auth": { 00:23:12.175 
"state": "completed", 00:23:12.175 "digest": "sha512", 00:23:12.175 "dhgroup": "ffdhe8192" 00:23:12.175 } 00:23:12.175 } 00:23:12.175 ]' 00:23:12.175 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:12.175 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:12.175 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:12.175 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:12.175 00:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:12.175 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.175 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.175 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:12.434 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:23:12.434 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:01:MjQ4NDAyMWE1ZjYyNjVmNDU0YTMyNzFjMDRkNGYzNzezULBJ: 00:23:13.000 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.000 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:13.000 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.000 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.000 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.000 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:13.000 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:13.000 00:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:13.259 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:13.259 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey 
qpairs 00:23:13.259 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:13.259 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:13.259 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:13.259 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.259 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:23:13.259 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.259 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.259 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.259 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:13.259 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:13.259 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:13.826 00:23:13.826 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:13.826 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:13.827 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.827 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.827 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.827 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.827 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.827 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.827 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:13.827 { 00:23:13.827 "cntlid": 143, 00:23:13.827 "qid": 0, 00:23:13.827 "state": "enabled", 00:23:13.827 "thread": "nvmf_tgt_poll_group_000", 00:23:13.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:13.827 "listen_address": { 00:23:13.827 "trtype": "TCP", 00:23:13.827 "adrfam": "IPv4", 00:23:13.827 "traddr": "10.0.0.2", 00:23:13.827 "trsvcid": "4420" 00:23:13.827 }, 00:23:13.827 "peer_address": { 00:23:13.827 "trtype": "TCP", 00:23:13.827 "adrfam": "IPv4", 00:23:13.827 
"traddr": "10.0.0.1", 00:23:13.827 "trsvcid": "47604" 00:23:13.827 }, 00:23:13.827 "auth": { 00:23:13.827 "state": "completed", 00:23:13.827 "digest": "sha512", 00:23:13.827 "dhgroup": "ffdhe8192" 00:23:13.827 } 00:23:13.827 } 00:23:13.827 ]' 00:23:13.827 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:14.085 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:14.085 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:14.085 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:14.086 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:14.086 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.086 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.086 00:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.344 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:23:14.344 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:23:14.912 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.913 00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.481 00:23:15.481 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:15.481 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:15.481 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.739 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.740 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.740 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.740 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.740 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.740 00:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:15.740 { 00:23:15.740 "cntlid": 145, 00:23:15.740 "qid": 0, 00:23:15.740 "state": "enabled", 00:23:15.740 "thread": "nvmf_tgt_poll_group_000", 00:23:15.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:15.740 "listen_address": { 00:23:15.740 "trtype": "TCP", 00:23:15.740 "adrfam": "IPv4", 00:23:15.740 "traddr": "10.0.0.2", 00:23:15.740 "trsvcid": "4420" 00:23:15.740 }, 00:23:15.740 "peer_address": { 00:23:15.740 "trtype": "TCP", 00:23:15.740 "adrfam": "IPv4", 00:23:15.740 "traddr": "10.0.0.1", 00:23:15.740 "trsvcid": "56940" 00:23:15.740 }, 00:23:15.740 "auth": { 00:23:15.740 "state": "completed", 00:23:15.740 "digest": "sha512", 00:23:15.740 "dhgroup": "ffdhe8192" 00:23:15.740 } 00:23:15.740 } 00:23:15.740 ]' 00:23:15.740 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:15.740 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:15.740 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:15.740 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:15.740 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:15.998 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.998 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.998 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.998 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:23:15.998 00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTFjNzJjYzlmNDY2NDVkYjY3MDY3MDgwYmZjMTEyZjFjMmE5ZDMzZTY0MjQxNWQw29zpNA==: --dhchap-ctrl-secret DHHC-1:03:ZTM4ZDIyOThlZmUzZTIwMThmNGQ2OGQyZjkzYWNiYTIzMjRlNDQyMmQwNGM3Mjg4OTMwYTIxMjFmNTRjNjY0ZQc1of0=: 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:16.570 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:17.138 request: 00:23:17.138 { 00:23:17.138 "name": "nvme0", 00:23:17.138 "trtype": "tcp", 00:23:17.138 "traddr": "10.0.0.2", 00:23:17.138 "adrfam": "ipv4", 00:23:17.138 "trsvcid": "4420", 00:23:17.138 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:17.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:17.138 "prchk_reftag": false, 00:23:17.138 "prchk_guard": false, 00:23:17.138 "hdgst": false, 00:23:17.138 "ddgst": false, 00:23:17.138 "dhchap_key": "key2", 00:23:17.138 "allow_unrecognized_csi": false, 00:23:17.138 "method": "bdev_nvme_attach_controller", 00:23:17.138 "req_id": 1 00:23:17.138 } 00:23:17.138 Got JSON-RPC error response 00:23:17.138 response: 00:23:17.138 { 00:23:17.138 "code": -5, 00:23:17.138 "message": "Input/output error" 00:23:17.138 } 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:17.139 00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:17.707 request: 00:23:17.707 { 00:23:17.707 "name": "nvme0", 00:23:17.707 "trtype": "tcp", 00:23:17.707 "traddr": "10.0.0.2", 00:23:17.707 "adrfam": "ipv4", 00:23:17.707 "trsvcid": "4420", 00:23:17.707 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:17.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:17.707 "prchk_reftag": false, 00:23:17.707 "prchk_guard": false, 00:23:17.707 "hdgst": false, 00:23:17.707 
"ddgst": false, 00:23:17.707 "dhchap_key": "key1", 00:23:17.707 "dhchap_ctrlr_key": "ckey2", 00:23:17.707 "allow_unrecognized_csi": false, 00:23:17.707 "method": "bdev_nvme_attach_controller", 00:23:17.707 "req_id": 1 00:23:17.707 } 00:23:17.707 Got JSON-RPC error response 00:23:17.707 response: 00:23:17.707 { 00:23:17.707 "code": -5, 00:23:17.707 "message": "Input/output error" 00:23:17.707 } 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:17.707 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.708 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:17.708 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.708 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.708 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.708 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.966 request: 00:23:17.966 { 00:23:17.966 "name": "nvme0", 00:23:17.966 "trtype": "tcp", 00:23:17.966 "traddr": "10.0.0.2", 00:23:17.966 "adrfam": "ipv4", 00:23:17.966 "trsvcid": "4420", 00:23:17.966 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:17.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:17.966 "prchk_reftag": false, 00:23:17.966 "prchk_guard": false, 00:23:17.966 "hdgst": false, 00:23:17.966 "ddgst": false, 00:23:17.966 "dhchap_key": "key1", 00:23:17.966 "dhchap_ctrlr_key": "ckey1", 00:23:17.966 "allow_unrecognized_csi": false, 00:23:17.966 "method": "bdev_nvme_attach_controller", 00:23:17.966 "req_id": 1 00:23:17.966 } 00:23:17.966 Got JSON-RPC error response 00:23:17.966 response: 00:23:17.966 { 00:23:17.966 "code": -5, 00:23:17.966 "message": "Input/output error" 00:23:17.966 } 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 349315 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 349315 ']' 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 349315 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 349315 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 349315' 00:23:18.226 killing process with pid 349315 00:23:18.226 00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 349315 00:23:18.226 00:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 349315 00:23:18.226 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:18.226 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:18.226 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.226 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.226 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=372166 00:23:18.226 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:18.226 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 372166 00:23:18.226 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 372166 ']' 00:23:18.226 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.226 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.226 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.226 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.226 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.485 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.485 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:18.485 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:18.485 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.485 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.485 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.485 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:18.485 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 372166 00:23:18.485 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 372166 ']' 00:23:18.485 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.485 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.485 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:18.485 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.485 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.743 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.743 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:18.743 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:18.743 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.743 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.002 null0 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uFu 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.ucS ]] 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ucS 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.MVE 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.KAE ]] 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KAE 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:19.003 00:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.GwY 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.P3d ]] 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.P3d 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.I48 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:23:19.003 00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:19.942 nvme0n1 00:23:19.942 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:19.942 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:19.942 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.942 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.942 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.942 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.942 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.942 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.942 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:19.942 { 00:23:19.942 "cntlid": 1, 00:23:19.942 "qid": 0, 00:23:19.942 "state": "enabled", 00:23:19.942 "thread": "nvmf_tgt_poll_group_000", 00:23:19.942 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:19.942 "listen_address": { 00:23:19.942 "trtype": "TCP", 00:23:19.942 "adrfam": "IPv4", 00:23:19.942 "traddr": "10.0.0.2", 00:23:19.942 "trsvcid": "4420" 00:23:19.942 }, 00:23:19.942 "peer_address": { 00:23:19.942 "trtype": "TCP", 00:23:19.942 "adrfam": "IPv4", 00:23:19.942 "traddr": "10.0.0.1", 00:23:19.942 "trsvcid": "56996" 00:23:19.942 }, 00:23:19.942 "auth": { 00:23:19.942 "state": "completed", 00:23:19.942 "digest": "sha512", 00:23:19.942 "dhgroup": "ffdhe8192" 00:23:19.942 } 00:23:19.942 } 00:23:19.942 ]' 00:23:19.942 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:19.942 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:19.942 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:19.942 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:19.942 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:20.206 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.206 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.206 00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.206 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:23:20.207 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:23:20.775 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.775 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:20.775 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.775 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.775 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.775 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:23:20.775 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.775 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.775 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.775 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:20.775 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:21.034 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:21.034 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:21.034 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:21.034 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:21.034 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.034 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:21.034 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.034 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:21.034 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:21.035 00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:21.294 request: 00:23:21.294 { 00:23:21.294 "name": "nvme0", 00:23:21.294 "trtype": "tcp", 00:23:21.294 "traddr": "10.0.0.2", 00:23:21.294 "adrfam": "ipv4", 00:23:21.294 "trsvcid": "4420", 00:23:21.294 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:21.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:21.294 "prchk_reftag": false, 00:23:21.294 "prchk_guard": false, 00:23:21.294 "hdgst": false, 00:23:21.294 "ddgst": false, 00:23:21.294 "dhchap_key": "key3", 00:23:21.294 "allow_unrecognized_csi": false, 00:23:21.294 "method": "bdev_nvme_attach_controller", 00:23:21.294 "req_id": 1 00:23:21.294 } 00:23:21.294 Got JSON-RPC error response 00:23:21.294 response: 00:23:21.294 { 00:23:21.294 "code": -5, 00:23:21.294 "message": "Input/output error" 00:23:21.294 } 00:23:21.294 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:21.294 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:21.294 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:21.294 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:21.294 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:21.294 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:21.294 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:21.294 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:21.553 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:21.553 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:21.553 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:21.553 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:21.553 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.553 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:21.553 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.553 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:21.553 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:21.553 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:21.818 request: 00:23:21.818 { 00:23:21.818 "name": "nvme0", 00:23:21.818 "trtype": "tcp", 00:23:21.818 "traddr": "10.0.0.2", 00:23:21.818 "adrfam": "ipv4", 00:23:21.818 "trsvcid": "4420", 00:23:21.818 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:21.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:21.818 "prchk_reftag": false, 00:23:21.818 "prchk_guard": false, 00:23:21.818 "hdgst": false, 00:23:21.818 "ddgst": false, 00:23:21.818 "dhchap_key": "key3", 00:23:21.818 "allow_unrecognized_csi": false, 00:23:21.818 "method": "bdev_nvme_attach_controller", 00:23:21.818 "req_id": 1 00:23:21.818 } 00:23:21.818 Got JSON-RPC error response 00:23:21.818 response: 00:23:21.818 { 00:23:21.818 "code": -5, 00:23:21.818 "message": "Input/output error" 00:23:21.818 } 00:23:21.818 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:21.818 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:21.819 00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:22.391 request: 00:23:22.391 { 00:23:22.391 "name": "nvme0", 00:23:22.391 "trtype": "tcp", 00:23:22.391 "traddr": "10.0.0.2", 00:23:22.391 "adrfam": "ipv4", 00:23:22.391 "trsvcid": "4420", 00:23:22.391 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:22.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:22.391 "prchk_reftag": false, 00:23:22.391 "prchk_guard": false, 00:23:22.391 "hdgst": false, 00:23:22.391 "ddgst": false, 00:23:22.391 "dhchap_key": "key0", 00:23:22.391 "dhchap_ctrlr_key": "key1", 00:23:22.391 "allow_unrecognized_csi": false, 00:23:22.391 "method": "bdev_nvme_attach_controller", 00:23:22.391 "req_id": 1 00:23:22.391 } 00:23:22.391 Got JSON-RPC error response 00:23:22.391 response: 00:23:22.391 { 00:23:22.391 "code": -5, 00:23:22.391 "message": "Input/output error" 00:23:22.391 } 00:23:22.391 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:22.392 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:22.392 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:22.392 00:04:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:22.392 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:22.392 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:22.392 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:22.392 nvme0n1 00:23:22.392 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:22.392 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:22.392 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.651 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.651 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.651 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.911 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:23:22.911 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.911 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.911 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.911 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:22.911 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:22.911 00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:23.849 nvme0n1 00:23:23.849 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:23.849 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:23:23.849 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:23.849 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.849 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:23.849 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.849 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.849 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.849 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:23.849 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:23.849 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.108 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.108 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:23:24.108 00:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: --dhchap-ctrl-secret DHHC-1:03:NDQxYmUwZDMwZDg4YjQ0YjFiMzM1ODgzYTZmZmQ4MzNiYmM5MTI4NmY1OWI5ZDdiMWI3NDkyZGE4NzM0NjA4MdIr6/M=: 00:23:24.676 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:24.676 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:24.676 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:24.676 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:24.676 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:24.676 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:24.676 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:24.676 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.676 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.936 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # 
NOT bdev_connect -b nvme0 --dhchap-key key1 00:23:24.936 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:24.936 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:24.936 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:24.936 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.936 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:24.936 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.936 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:24.936 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:24.936 00:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:25.195 request: 00:23:25.195 { 00:23:25.195 "name": "nvme0", 00:23:25.195 "trtype": "tcp", 00:23:25.195 "traddr": "10.0.0.2", 00:23:25.195 "adrfam": "ipv4", 00:23:25.195 "trsvcid": "4420", 00:23:25.195 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:25.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:23:25.195 "prchk_reftag": false, 00:23:25.195 "prchk_guard": false, 00:23:25.195 "hdgst": false, 00:23:25.195 "ddgst": false, 00:23:25.195 "dhchap_key": "key1", 00:23:25.195 "allow_unrecognized_csi": false, 00:23:25.195 "method": "bdev_nvme_attach_controller", 00:23:25.195 "req_id": 1 00:23:25.195 } 00:23:25.195 Got JSON-RPC error response 00:23:25.195 response: 00:23:25.195 { 00:23:25.195 "code": -5, 00:23:25.195 "message": "Input/output error" 00:23:25.195 } 00:23:25.195 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:25.195 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:25.195 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:25.195 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:25.195 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:25.195 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:25.195 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:26.134 nvme0n1 00:23:26.134 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:26.134 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:26.134 00:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.393 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.393 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.393 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.393 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:26.393 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.393 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.393 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.393 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:26.393 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:26.393 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:26.653 nvme0n1 00:23:26.653 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:26.653 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:26.653 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.912 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.913 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.913 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: '' 2s 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: ]] 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NWRjNDQ2MWI5YTg2ZmM0ZTVmNWU5NDVmYTQxZWUyN2L1+Mna: 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:27.172 00:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:29.080 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:29.080 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:29.080 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:29.080 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:29.080 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:29.080 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:29.080 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:29.080 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:29.080 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.080 00:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.080 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.080 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: 2s 00:23:29.080 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:29.080 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:29.080 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:29.080 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: 00:23:29.080 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:29.080 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:29.080 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:29.080 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: ]] 00:23:29.080 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OTM2NTI3YTY3YTg1MjM4NTljYWNjYjllMzFiODNiMjZlNzE2Yzc1MjBmMzc5OTAzLoBv8w==: 00:23:29.339 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:29.339 00:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:31.259 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:32.202 nvme0n1 00:23:32.202 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:32.202 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.202 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.202 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.202 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:32.202 00:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:32.461 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:32.461 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:32.461 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.720 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.720 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:32.720 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.720 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.720 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.720 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:32.720 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:32.979 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:32.979 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:32.979 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:33.237 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.237 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:33.238 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.238 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.238 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.238 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:33.238 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:33.238 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:33.238 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:33.238 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.238 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:33.238 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.238 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:33.238 00:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:33.497 request: 00:23:33.497 { 00:23:33.497 "name": "nvme0", 00:23:33.497 "dhchap_key": "key1", 00:23:33.497 "dhchap_ctrlr_key": "key3", 00:23:33.497 "method": "bdev_nvme_set_keys", 00:23:33.497 "req_id": 1 00:23:33.497 } 00:23:33.497 Got JSON-RPC error response 00:23:33.497 response: 00:23:33.497 { 00:23:33.497 "code": -13, 00:23:33.497 "message": "Permission denied" 00:23:33.497 } 00:23:33.497 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:33.497 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:33.497 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:33.497 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:33.497 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:33.497 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:33.497 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
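At this point the test is simply waiting for the host to drop the controller after the failed re-key before moving on. A sketch of an equivalent poll (the loop shape is inferred from the jq-length checks and 1s sleeps in the trace; rpc.py path and /var/tmp/host.sock are the ones used throughout this run):
# illustrative only: block until the host reports zero attached controllers
while [ "$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length)" -ne 0 ]; do
    sleep 1    # matches the 1s retry cadence seen in auth.sh
done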
00:23:33.756 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:23:33.757 00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:35.151 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:35.151 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:35.151 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:35.151 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:35.151 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:35.151 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.151 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.151 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.151 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:35.151 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:35.151 00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:35.719 nvme0n1 00:23:35.719 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:35.719 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.719 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.977 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.977 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:35.977 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:35.977 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:35.977 00:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:35.977 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:35.977 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:35.977 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:35.977 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:35.977 00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:36.236 request: 00:23:36.236 { 00:23:36.236 "name": "nvme0", 00:23:36.236 "dhchap_key": "key2", 00:23:36.236 "dhchap_ctrlr_key": "key0", 00:23:36.236 "method": "bdev_nvme_set_keys", 00:23:36.236 "req_id": 1 00:23:36.236 } 00:23:36.236 Got JSON-RPC error response 00:23:36.236 response: 00:23:36.236 { 00:23:36.236 "code": -13, 00:23:36.236 "message": "Permission denied" 00:23:36.236 } 00:23:36.236 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:36.236 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:36.236 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:36.236 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:36.236 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:36.236 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:36.236 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.499 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:36.499 00:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:37.435 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:37.435 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:37.435 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:37.694 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:37.694 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:37.694 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:37.694 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 349437 00:23:37.694 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 349437 ']' 00:23:37.694 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 349437 00:23:37.694 00:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:37.694 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.694 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 349437 00:23:37.694 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:37.694 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:37.694 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 349437' 00:23:37.694 killing process with pid 349437 00:23:37.694 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 349437 00:23:37.694 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 349437 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:38.263 rmmod nvme_tcp 00:23:38.263 rmmod nvme_fabrics 00:23:38.263 rmmod nvme_keyring 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 372166 ']' 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 372166 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 372166 ']' 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 372166 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.263 00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 372166 00:23:38.263 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:38.263 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 372166' 00:23:38.264 killing process with pid 372166 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@973 -- # kill 372166 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 372166 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.264 00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.uFu /tmp/spdk.key-sha256.MVE /tmp/spdk.key-sha384.GwY /tmp/spdk.key-sha512.I48 /tmp/spdk.key-sha512.ucS /tmp/spdk.key-sha384.KAE /tmp/spdk.key-sha256.P3d '' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf-auth.log 00:23:40.804 00:23:40.804 real 2m37.000s 00:23:40.804 user 6m0.922s 00:23:40.804 sys 0m24.594s 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.804 ************************************ 00:23:40.804 END TEST nvmf_auth_target 00:23:40.804 ************************************ 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:40.804 ************************************ 00:23:40.804 START TEST nvmf_bdevio_no_huge 00:23:40.804 ************************************ 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp 
--no-hugepages 00:23:40.804 * Looking for test storage... 00:23:40.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:40.804 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:40.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.805 --rc genhtml_branch_coverage=1 00:23:40.805 --rc genhtml_function_coverage=1 00:23:40.805 --rc genhtml_legend=1 00:23:40.805 --rc geninfo_all_blocks=1 00:23:40.805 --rc geninfo_unexecuted_blocks=1 00:23:40.805 00:23:40.805 ' 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:40.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.805 --rc genhtml_branch_coverage=1 00:23:40.805 --rc genhtml_function_coverage=1 00:23:40.805 --rc genhtml_legend=1 00:23:40.805 --rc geninfo_all_blocks=1 00:23:40.805 --rc geninfo_unexecuted_blocks=1 00:23:40.805 00:23:40.805 ' 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:40.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.805 --rc genhtml_branch_coverage=1 00:23:40.805 --rc genhtml_function_coverage=1 00:23:40.805 --rc genhtml_legend=1 00:23:40.805 --rc geninfo_all_blocks=1 00:23:40.805 --rc geninfo_unexecuted_blocks=1 00:23:40.805 00:23:40.805 ' 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:40.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:40.805 --rc genhtml_branch_coverage=1 00:23:40.805 --rc genhtml_function_coverage=1 00:23:40.805 --rc genhtml_legend=1 00:23:40.805 --rc geninfo_all_blocks=1 00:23:40.805 --rc geninfo_unexecuted_blocks=1 00:23:40.805 00:23:40.805 ' 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:40.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:40.805 00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:47.381 
00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.381 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:47.382 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:47.382 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:47.382 Found net devices under 0000:86:00.0: cvl_0_0 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:47.382 Found net devices under 0000:86:00.1: cvl_0_1 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:47.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:47.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:23:47.382 00:23:47.382 --- 10.0.0.2 ping statistics --- 00:23:47.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.382 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:47.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:23:47.382 00:23:47.382 --- 10.0.0.1 ping statistics --- 00:23:47.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.382 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=378975 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 378975 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 378975 ']' 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:47.382 00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:47.382 [2024-12-10 00:05:21.510518] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:23:47.382 [2024-12-10 00:05:21.510560] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:47.382 [2024-12-10 00:05:21.595411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:47.383 [2024-12-10 00:05:21.643030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.383 [2024-12-10 00:05:21.643064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.383 [2024-12-10 00:05:21.643071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.383 [2024-12-10 00:05:21.643077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.383 [2024-12-10 00:05:21.643083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
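For reference, the nvmf_tcp_init and nvmfappstart plumbing traced above condenses to roughly the sequence below. This is a sketch, not the harness itself: the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses and the nvmf_tgt flags are the ones this run used, relative paths assume the SPDK repo root as the working directory, and the iptables comment is simplified to the SPDK_NVMF tag that the later cleanup greps for.

  # Move one port of the e810 pair into a private namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP traffic (port 4420) in on the initiator-side interface, tagged for later cleanup.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  # Sanity-check both directions before starting the target, as the pings above do.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Start nvmf_tgt inside the namespace without hugepages, matching this bdevio run's flags.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &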
00:23:47.383 [2024-12-10 00:05:21.648176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:47.383 [2024-12-10 00:05:21.648295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:47.383 [2024-12-10 00:05:21.648543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.383 [2024-12-10 00:05:21.648543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:47.642 [2024-12-10 00:05:22.416197] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:47.642 Malloc0 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:47.642 [2024-12-10 00:05:22.460456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:47.642 { 00:23:47.642 "params": { 00:23:47.642 "name": "Nvme$subsystem", 00:23:47.642 "trtype": "$TEST_TRANSPORT", 00:23:47.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.642 "adrfam": "ipv4", 00:23:47.642 "trsvcid": "$NVMF_PORT", 00:23:47.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.642 "hdgst": ${hdgst:-false}, 00:23:47.642 "ddgst": ${ddgst:-false} 00:23:47.642 }, 00:23:47.642 "method": "bdev_nvme_attach_controller" 00:23:47.642 } 00:23:47.642 EOF 00:23:47.642 )") 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:23:47.642 00:05:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:47.642 "params": { 00:23:47.642 "name": "Nvme1", 00:23:47.642 "trtype": "tcp", 00:23:47.642 "traddr": "10.0.0.2", 00:23:47.642 "adrfam": "ipv4", 00:23:47.642 "trsvcid": "4420", 00:23:47.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:47.642 "hdgst": false, 00:23:47.642 "ddgst": false 00:23:47.642 }, 00:23:47.642 "method": "bdev_nvme_attach_controller" 00:23:47.642 }' 00:23:47.642 [2024-12-10 00:05:22.512134] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
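The target-side configuration that rpc_cmd drives in the trace above can be reproduced by hand with scripts/rpc.py; the RPC method names and arguments below are exactly the ones visible in the log, issued against the default /var/tmp/spdk.sock socket (a sketch, run from the SPDK repo root). The bdevio binary is then pointed at the new listener through a generated JSON config; the /dev/fd/62 seen in the trace is simply what the process substitution resolved to.

  # Transport, backing bdev, subsystem, namespace, listener -- same order as the trace.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # gen_nvmf_target_json (nvmf/common.sh) emits the bdev_nvme_attach_controller config shown above.
  ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024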
00:23:47.642 [2024-12-10 00:05:22.512199] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid379172 ] 00:23:47.901 [2024-12-10 00:05:22.594546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:47.901 [2024-12-10 00:05:22.643618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.901 [2024-12-10 00:05:22.643725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.901 [2024-12-10 00:05:22.643726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.160 I/O targets: 00:23:48.160 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:48.160 00:23:48.160 00:23:48.160 CUnit - A unit testing framework for C - Version 2.1-3 00:23:48.160 http://cunit.sourceforge.net/ 00:23:48.160 00:23:48.160 00:23:48.160 Suite: bdevio tests on: Nvme1n1 00:23:48.160 Test: blockdev write read block ...passed 00:23:48.160 Test: blockdev write zeroes read block ...passed 00:23:48.160 Test: blockdev write zeroes read no split ...passed 00:23:48.160 Test: blockdev write zeroes read split ...passed 00:23:48.160 Test: blockdev write zeroes read split partial ...passed 00:23:48.160 Test: blockdev reset ...[2024-12-10 00:05:23.093856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:48.160 [2024-12-10 00:05:23.093918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2393510 (9): Bad file descriptor 00:23:48.419 [2024-12-10 00:05:23.188946] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:23:48.419 passed 00:23:48.419 Test: blockdev write read 8 blocks ...passed 00:23:48.419 Test: blockdev write read size > 128k ...passed 00:23:48.419 Test: blockdev write read invalid size ...passed 00:23:48.419 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:48.419 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:48.419 Test: blockdev write read max offset ...passed 00:23:48.677 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:48.677 Test: blockdev writev readv 8 blocks ...passed 00:23:48.677 Test: blockdev writev readv 30 x 1block ...passed 00:23:48.677 Test: blockdev writev readv block ...passed 00:23:48.677 Test: blockdev writev readv size > 128k ...passed 00:23:48.677 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:48.677 Test: blockdev comparev and writev ...[2024-12-10 00:05:23.399990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:48.677 [2024-12-10 00:05:23.400018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:48.677 [2024-12-10 00:05:23.400033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:48.677 [2024-12-10 00:05:23.400041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:48.677 [2024-12-10 00:05:23.400282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:48.677 [2024-12-10 00:05:23.400303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:48.677 [2024-12-10 00:05:23.400316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:48.677 [2024-12-10 00:05:23.400323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:48.677 [2024-12-10 00:05:23.400560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:48.677 [2024-12-10 00:05:23.400569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:48.677 [2024-12-10 00:05:23.400581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:48.677 [2024-12-10 00:05:23.400588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:48.678 [2024-12-10 00:05:23.400812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:48.678 [2024-12-10 00:05:23.400822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:48.678 [2024-12-10 00:05:23.400833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:48.678 [2024-12-10 00:05:23.400840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:48.678 passed 00:23:48.678 Test: blockdev nvme passthru rw ...passed 00:23:48.678 Test: blockdev nvme passthru vendor specific ...[2024-12-10 00:05:23.483454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:48.678 [2024-12-10 00:05:23.483469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:48.678 [2024-12-10 00:05:23.483574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:48.678 [2024-12-10 00:05:23.483583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:48.678 [2024-12-10 00:05:23.483690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:48.678 [2024-12-10 00:05:23.483699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:48.678 [2024-12-10 00:05:23.483803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:48.678 [2024-12-10 00:05:23.483818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:48.678 passed 00:23:48.678 Test: blockdev nvme admin passthru ...passed 00:23:48.678 Test: blockdev copy ...passed 00:23:48.678 00:23:48.678 Run Summary: Type Total Ran Passed Failed Inactive 00:23:48.678 suites 1 1 n/a 0 0 00:23:48.678 tests 23 23 23 0 0 00:23:48.678 asserts 152 152 152 0 n/a 00:23:48.678 00:23:48.678 Elapsed time = 1.218 seconds 00:23:48.937 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:48.937 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.937 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:48.937 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.937 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:48.937 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:48.937 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:48.937 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:48.937 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:48.937 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:48.937 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:48.937 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:48.937 rmmod nvme_tcp 00:23:48.937 rmmod nvme_fabrics 00:23:48.937 rmmod nvme_keyring 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 378975 ']' 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 378975 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 378975 ']' 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 378975 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 378975 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 378975' 00:23:49.197 killing process with pid 378975 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 378975 00:23:49.197 00:05:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 378975 00:23:49.457 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:49.457 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:49.457 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:49.457 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:49.457 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:23:49.457 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:49.457 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:23:49.457 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:49.457 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:49.457 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.457 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.457 00:05:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:51.997 00:23:51.997 real 0m10.979s 00:23:51.997 user 0m14.445s 00:23:51.997 sys 0m5.344s 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:51.997 ************************************ 00:23:51.997 END TEST nvmf_bdevio_no_huge 00:23:51.997 ************************************ 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:51.997 ************************************ 00:23:51.997 START TEST nvmf_tls 00:23:51.997 ************************************ 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:51.997 * Looking for test storage... 00:23:51.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:51.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.997 --rc genhtml_branch_coverage=1 00:23:51.997 --rc genhtml_function_coverage=1 00:23:51.997 --rc genhtml_legend=1 00:23:51.997 --rc geninfo_all_blocks=1 00:23:51.997 --rc geninfo_unexecuted_blocks=1 00:23:51.997 00:23:51.997 ' 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:51.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.997 --rc genhtml_branch_coverage=1 00:23:51.997 --rc genhtml_function_coverage=1 00:23:51.997 --rc genhtml_legend=1 00:23:51.997 --rc geninfo_all_blocks=1 00:23:51.997 --rc geninfo_unexecuted_blocks=1 00:23:51.997 00:23:51.997 ' 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:51.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.997 --rc genhtml_branch_coverage=1 00:23:51.997 --rc genhtml_function_coverage=1 00:23:51.997 --rc genhtml_legend=1 00:23:51.997 --rc geninfo_all_blocks=1 00:23:51.997 --rc geninfo_unexecuted_blocks=1 00:23:51.997 00:23:51.997 ' 00:23:51.997 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:51.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.997 --rc genhtml_branch_coverage=1 00:23:51.997 --rc genhtml_function_coverage=1 00:23:51.997 --rc genhtml_legend=1 00:23:51.997 --rc geninfo_all_blocks=1 00:23:51.997 --rc geninfo_unexecuted_blocks=1 00:23:51.997 00:23:51.997 ' 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
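Looking back at the nvmftestfini/nvmf_tcp_fini trace that closed the previous suite, the cleanup it performs amounts to scrubbing the tagged firewall rule, flushing the initiator-side address and removing the target namespace. Condensed as a sketch (the explicit ip netns delete stands in for the harness's _remove_spdk_ns helper, whose output is redirected away in the log):

  # Drop only the iptables rules the test added (tagged SPDK_NVMF), keep everything else.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Remove the address used by the initiator interface and the namespace the target ran in.
  ip -4 addr flush cvl_0_1
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true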
00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:51.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:51.998 00:05:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.277 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.277 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:57.277 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:57.277 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:57.277 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:57.277 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:57.277 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
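The nvmftestinit path above tears down any leftover SPDK network namespace and then enumerates candidate NICs purely by PCI vendor/device ID: the e810 array collects the Intel 0x1592/0x159b devices, x722 collects 0x37d2, the mlx array collects the Mellanox IDs, and pci_devs is then narrowed to the e810 set ([[ e810 == e810 ]] in the trace). A rough stand-alone approximation of that scan, for orientation only; the harness itself reads a pre-built pci_bus_cache rather than calling lspci:

intel=0x8086
# E810 device IDs listed in the trace; lspci -d wants bare hex IDs
for dev_id in 1592 159b; do
  lspci -D -d "${intel#0x}:${dev_id}"
done

The two "Found 0000:86:00.0 / 0000:86:00.1 (0x8086 - 0x159b)" lines further down are exactly that match, and the cvl_0_0/cvl_0_1 entries found under /sys/bus/pci/devices/<pci>/net/ become the target and initiator interfaces for the TCP run.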
00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:57.537 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:57.537 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:57.537 Found net devices under 0000:86:00.0: cvl_0_0 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.537 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:57.538 Found net devices under 0000:86:00.1: cvl_0_1 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:57.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.490 ms 00:23:57.538 00:23:57.538 --- 10.0.0.2 ping statistics --- 00:23:57.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.538 rtt min/avg/max/mdev = 0.490/0.490/0.490/0.000 ms 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:23:57.538 00:23:57.538 --- 10.0.0.1 ping statistics --- 00:23:57.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.538 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:57.538 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=382933 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 382933 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 382933 ']' 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.801 [2024-12-10 00:05:32.549089] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:23:57.801 [2024-12-10 00:05:32.549135] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.801 [2024-12-10 00:05:32.629566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.801 [2024-12-10 00:05:32.669641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.801 [2024-12-10 00:05:32.669675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.801 [2024-12-10 00:05:32.669683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.801 [2024-12-10 00:05:32.669689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.801 [2024-12-10 00:05:32.669693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.801 [2024-12-10 00:05:32.670253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:57.801 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:58.059 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.059 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:58.059 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:58.059 true 00:23:58.060 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:58.060 00:05:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:58.319 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:58.319 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:58.319 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:58.578 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:58.578 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:58.837 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:58.837 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:58.837 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:58.837 00:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:58.837 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:59.096 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:59.096 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:59.096 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:59.096 00:05:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:59.356 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:59.356 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:59.356 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:59.615 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:59.615 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:59.615 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:59.615 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:59.615 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:59.874 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:59.874 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 
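Before framework_start_init is issued, the test exercises the socket-layer TLS knobs over JSON-RPC; the target was started with --wait-for-rpc, so these options are still mutable. Condensed from the rpc.py calls in the trace (the long script path is shortened to rpc.py here):

rpc.py sock_set_default_impl -i ssl
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # reads back 13
rpc.py sock_impl_set_options -i ssl --tls-version 7
rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # reads back 7
rpc.py sock_impl_set_options -i ssl --enable-ktls
rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls   # true
rpc.py sock_impl_set_options -i ssl --disable-ktls
rpc.py sock_impl_get_options -i ssl | jq -r .enable_ktls   # false

The format_interchange_psk calls around this point then turn the two raw hex keys into NVMeTLSkey-1:01:...: interchange strings, which are written to mktemp files and chmod 0600 before being registered with the keyring.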
00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.BkwxuxBUJv 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.fCb6x47DNS 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.BkwxuxBUJv 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.fCb6x47DNS 00:24:00.134 00:05:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:00.393 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py framework_start_init 00:24:00.652 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.BkwxuxBUJv 00:24:00.653 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.BkwxuxBUJv 00:24:00.653 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:00.912 [2024-12-10 00:05:35.612355] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.912 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:00.912 00:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:01.172 [2024-12-10 00:05:35.997334] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:01.172 [2024-12-10 00:05:35.997556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.172 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:01.431 malloc0 00:24:01.431 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:01.690 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.BkwxuxBUJv 00:24:01.690 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:01.949 00:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.BkwxuxBUJv 00:24:11.931 Initializing NVMe Controllers 00:24:11.931 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:11.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:11.931 Initialization complete. Launching workers. 00:24:11.931 ======================================================== 00:24:11.931 Latency(us) 00:24:11.931 Device Information : IOPS MiB/s Average min max 00:24:11.931 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16477.31 64.36 3884.23 857.88 4421.12 00:24:11.931 ======================================================== 00:24:11.931 Total : 16477.31 64.36 3884.23 857.88 4421.12 00:24:11.931 00:24:11.931 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BkwxuxBUJv 00:24:11.931 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:11.931 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:11.931 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:11.931 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BkwxuxBUJv 00:24:11.931 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:11.931 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=385292 00:24:12.191 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:12.191 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:12.191 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 385292 /var/tmp/bdevperf.sock 00:24:12.191 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 385292 ']' 00:24:12.191 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.191 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.191 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:24:12.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.191 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.191 00:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.191 [2024-12-10 00:05:46.909702] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:24:12.191 [2024-12-10 00:05:46.909750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385292 ] 00:24:12.191 [2024-12-10 00:05:46.984748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.191 [2024-12-10 00:05:47.026505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.191 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.191 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:12.191 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BkwxuxBUJv 00:24:12.450 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:12.710 [2024-12-10 00:05:47.479490] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:12.710 TLSTESTn1 00:24:12.710 00:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:12.970 Running I/O for 10 seconds... 
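At this point the listener at 10.0.0.2:4420 was created with -k (TLS required), the malloc0 namespace is attached, the interchange key sits in /tmp/tmp.BkwxuxBUJv, and both ends have registered it as key0; an initial spdk_nvme_perf pass already ran against the same listener with -S ssl and --psk-path. Reconstructed from the rpc.py calls in the trace (paths shortened), the wiring for the successful TLSTESTn1 run is roughly:

# target-side RPCs (nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace)
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.BkwxuxBUJv
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# initiator-side RPCs, against the bdevperf socket
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BkwxuxBUJv
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

The ten seconds of per-second throughput samples and the TLSTESTn1 latency summary follow directly below.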
00:24:14.844 4675.00 IOPS, 18.26 MiB/s [2024-12-09T23:05:50.717Z] 5019.50 IOPS, 19.61 MiB/s [2024-12-09T23:05:52.097Z] 5054.00 IOPS, 19.74 MiB/s [2024-12-09T23:05:52.666Z] 5057.50 IOPS, 19.76 MiB/s [2024-12-09T23:05:54.046Z] 5006.40 IOPS, 19.56 MiB/s [2024-12-09T23:05:54.984Z] 5066.67 IOPS, 19.79 MiB/s [2024-12-09T23:05:55.924Z] 4971.57 IOPS, 19.42 MiB/s [2024-12-09T23:05:56.862Z] 5032.88 IOPS, 19.66 MiB/s [2024-12-09T23:05:57.841Z] 5063.67 IOPS, 19.78 MiB/s [2024-12-09T23:05:57.841Z] 5104.50 IOPS, 19.94 MiB/s 00:24:22.905 Latency(us) 00:24:22.905 [2024-12-09T23:05:57.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.905 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:22.905 Verification LBA range: start 0x0 length 0x2000 00:24:22.905 TLSTESTn1 : 10.01 5110.18 19.96 0.00 0.00 25011.66 4815.47 31457.28 00:24:22.905 [2024-12-09T23:05:57.841Z] =================================================================================================================== 00:24:22.905 [2024-12-09T23:05:57.841Z] Total : 5110.18 19.96 0.00 0.00 25011.66 4815.47 31457.28 00:24:22.905 { 00:24:22.905 "results": [ 00:24:22.905 { 00:24:22.905 "job": "TLSTESTn1", 00:24:22.905 "core_mask": "0x4", 00:24:22.905 "workload": "verify", 00:24:22.905 "status": "finished", 00:24:22.905 "verify_range": { 00:24:22.905 "start": 0, 00:24:22.905 "length": 8192 00:24:22.905 }, 00:24:22.905 "queue_depth": 128, 00:24:22.905 "io_size": 4096, 00:24:22.905 "runtime": 10.013533, 00:24:22.905 "iops": 5110.184387468439, 00:24:22.905 "mibps": 19.96165776354859, 00:24:22.905 "io_failed": 0, 00:24:22.905 "io_timeout": 0, 00:24:22.905 "avg_latency_us": 25011.662970925277, 00:24:22.905 "min_latency_us": 4815.471304347826, 00:24:22.905 "max_latency_us": 31457.28 00:24:22.905 } 00:24:22.905 ], 00:24:22.905 "core_count": 1 00:24:22.905 } 00:24:22.905 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:22.905 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 385292 00:24:22.905 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 385292 ']' 00:24:22.905 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 385292 00:24:22.905 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:22.905 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.905 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 385292 00:24:22.905 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:22.905 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:22.905 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 385292' 00:24:22.905 killing process with pid 385292 00:24:22.905 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 385292 00:24:22.905 Received shutdown signal, test time was about 10.000000 seconds 00:24:22.905 00:24:22.905 Latency(us) 00:24:22.905 [2024-12-09T23:05:57.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.905 [2024-12-09T23:05:57.841Z] 
=================================================================================================================== 00:24:22.905 [2024-12-09T23:05:57.841Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.906 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 385292 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCb6x47DNS 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCb6x47DNS 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fCb6x47DNS 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fCb6x47DNS 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=387118 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 387118 /var/tmp/bdevperf.sock 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 387118 ']' 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.165 00:05:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.165 [2024-12-10 00:05:57.963229] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:24:23.165 [2024-12-10 00:05:57.963277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387118 ] 00:24:23.165 [2024-12-10 00:05:58.027253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.165 [2024-12-10 00:05:58.063711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.425 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.425 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:23.425 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fCb6x47DNS 00:24:23.684 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:23.684 [2024-12-10 00:05:58.547315] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:23.684 [2024-12-10 00:05:58.551958] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:23.684 [2024-12-10 00:05:58.552601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa03dc0 (107): Transport endpoint is not connected 00:24:23.684 [2024-12-10 00:05:58.553593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa03dc0 (9): Bad file descriptor 00:24:23.684 [2024-12-10 00:05:58.554594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:23.684 [2024-12-10 00:05:58.554605] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:23.684 [2024-12-10 00:05:58.554613] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:23.684 [2024-12-10 00:05:58.554621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:24:23.684 request: 00:24:23.684 { 00:24:23.684 "name": "TLSTEST", 00:24:23.684 "trtype": "tcp", 00:24:23.684 "traddr": "10.0.0.2", 00:24:23.684 "adrfam": "ipv4", 00:24:23.684 "trsvcid": "4420", 00:24:23.684 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:23.684 "prchk_reftag": false, 00:24:23.684 "prchk_guard": false, 00:24:23.684 "hdgst": false, 00:24:23.684 "ddgst": false, 00:24:23.684 "psk": "key0", 00:24:23.684 "allow_unrecognized_csi": false, 00:24:23.684 "method": "bdev_nvme_attach_controller", 00:24:23.684 "req_id": 1 00:24:23.684 } 00:24:23.684 Got JSON-RPC error response 00:24:23.684 response: 00:24:23.684 { 00:24:23.684 "code": -5, 00:24:23.684 "message": "Input/output error" 00:24:23.684 } 00:24:23.684 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 387118 00:24:23.684 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 387118 ']' 00:24:23.684 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 387118 00:24:23.684 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:23.684 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.684 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 387118 00:24:23.943 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:23.943 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:23.943 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 387118' 00:24:23.943 killing process with pid 387118 00:24:23.943 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 387118 00:24:23.943 Received shutdown signal, test time was about 10.000000 seconds 00:24:23.943 00:24:23.943 Latency(us) 00:24:23.943 [2024-12-09T23:05:58.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.943 [2024-12-09T23:05:58.879Z] =================================================================================================================== 00:24:23.943 [2024-12-09T23:05:58.879Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:23.943 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 387118 00:24:23.943 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:23.943 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:23.943 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:23.943 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:23.943 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:23.943 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BkwxuxBUJv 00:24:23.943 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.BkwxuxBUJv 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.BkwxuxBUJv 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BkwxuxBUJv 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=387353 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 387353 /var/tmp/bdevperf.sock 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 387353 ']' 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.944 00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.944 [2024-12-10 00:05:58.836644] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:24:23.944 [2024-12-10 00:05:58.836692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387353 ] 00:24:24.204 [2024-12-10 00:05:58.903075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.204 [2024-12-10 00:05:58.940292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.204 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.204 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:24.204 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BkwxuxBUJv 00:24:24.463 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:24.722 [2024-12-10 00:05:59.404175] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:24.722 [2024-12-10 00:05:59.412835] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:24.722 [2024-12-10 00:05:59.412856] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:24.722 [2024-12-10 00:05:59.412894] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:24.722 [2024-12-10 00:05:59.413506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146adc0 (107): Transport endpoint is not connected 00:24:24.722 [2024-12-10 00:05:59.414500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x146adc0 (9): Bad file descriptor 00:24:24.722 [2024-12-10 00:05:59.415502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:24.722 [2024-12-10 00:05:59.415515] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:24.722 [2024-12-10 00:05:59.415522] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:24.722 [2024-12-10 00:05:59.415531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:24:24.722 request: 00:24:24.722 { 00:24:24.722 "name": "TLSTEST", 00:24:24.722 "trtype": "tcp", 00:24:24.722 "traddr": "10.0.0.2", 00:24:24.722 "adrfam": "ipv4", 00:24:24.722 "trsvcid": "4420", 00:24:24.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.722 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:24.722 "prchk_reftag": false, 00:24:24.722 "prchk_guard": false, 00:24:24.722 "hdgst": false, 00:24:24.722 "ddgst": false, 00:24:24.722 "psk": "key0", 00:24:24.722 "allow_unrecognized_csi": false, 00:24:24.722 "method": "bdev_nvme_attach_controller", 00:24:24.722 "req_id": 1 00:24:24.722 } 00:24:24.722 Got JSON-RPC error response 00:24:24.722 response: 00:24:24.722 { 00:24:24.722 "code": -5, 00:24:24.722 "message": "Input/output error" 00:24:24.722 } 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 387353 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 387353 ']' 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 387353 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 387353 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 387353' 00:24:24.722 killing process with pid 387353 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 387353 00:24:24.722 Received shutdown signal, test time was about 10.000000 seconds 00:24:24.722 00:24:24.722 Latency(us) 00:24:24.722 [2024-12-09T23:05:59.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.722 [2024-12-09T23:05:59.658Z] =================================================================================================================== 00:24:24.722 [2024-12-09T23:05:59.658Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 387353 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BkwxuxBUJv 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.BkwxuxBUJv 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.BkwxuxBUJv 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.BkwxuxBUJv 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=387375 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 387375 /var/tmp/bdevperf.sock 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 387375 ']' 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.722 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.981 [2024-12-10 00:05:59.697113] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:24:24.981 [2024-12-10 00:05:59.697175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387375 ] 00:24:24.981 [2024-12-10 00:05:59.774919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.981 [2024-12-10 00:05:59.814222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:24.981 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.981 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:24.981 00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.BkwxuxBUJv 00:24:25.240 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:25.500 [2024-12-10 00:06:00.306668] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:25.500 [2024-12-10 00:06:00.315176] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:25.500 [2024-12-10 00:06:00.315199] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:25.500 [2024-12-10 00:06:00.315223] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:25.500 [2024-12-10 00:06:00.316110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf9dc0 (107): Transport endpoint is not connected 00:24:25.500 [2024-12-10 00:06:00.317104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdf9dc0 (9): Bad file descriptor 00:24:25.500 [2024-12-10 00:06:00.318106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:24:25.500 [2024-12-10 00:06:00.318120] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:25.500 [2024-12-10 00:06:00.318128] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:25.500 [2024-12-10 00:06:00.318136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:24:25.500 request: 00:24:25.500 { 00:24:25.500 "name": "TLSTEST", 00:24:25.500 "trtype": "tcp", 00:24:25.500 "traddr": "10.0.0.2", 00:24:25.500 "adrfam": "ipv4", 00:24:25.500 "trsvcid": "4420", 00:24:25.500 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:25.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:25.500 "prchk_reftag": false, 00:24:25.500 "prchk_guard": false, 00:24:25.500 "hdgst": false, 00:24:25.500 "ddgst": false, 00:24:25.500 "psk": "key0", 00:24:25.500 "allow_unrecognized_csi": false, 00:24:25.500 "method": "bdev_nvme_attach_controller", 00:24:25.500 "req_id": 1 00:24:25.500 } 00:24:25.500 Got JSON-RPC error response 00:24:25.500 response: 00:24:25.500 { 00:24:25.500 "code": -5, 00:24:25.500 "message": "Input/output error" 00:24:25.500 } 00:24:25.500 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 387375 00:24:25.500 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 387375 ']' 00:24:25.500 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 387375 00:24:25.500 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:25.500 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.500 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 387375 00:24:25.500 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:25.500 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:25.500 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 387375' 00:24:25.500 killing process with pid 387375 00:24:25.500 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 387375 00:24:25.500 Received shutdown signal, test time was about 10.000000 seconds 00:24:25.500 00:24:25.500 Latency(us) 00:24:25.500 [2024-12-09T23:06:00.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.500 [2024-12-09T23:06:00.436Z] =================================================================================================================== 00:24:25.500 [2024-12-09T23:06:00.436Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:25.500 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 387375 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:25.764 00:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=387651 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 387651 /var/tmp/bdevperf.sock 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 387651 ']' 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:25.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.764 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.764 [2024-12-10 00:06:00.604953] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:24:25.764 [2024-12-10 00:06:00.605004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387651 ] 00:24:25.764 [2024-12-10 00:06:00.683791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.023 [2024-12-10 00:06:00.725964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.023 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.023 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:26.023 00:06:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:26.283 [2024-12-10 00:06:00.994282] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:26.283 [2024-12-10 00:06:00.994306] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:26.283 request: 00:24:26.283 { 00:24:26.283 "name": "key0", 00:24:26.283 "path": "", 00:24:26.283 "method": "keyring_file_add_key", 00:24:26.283 "req_id": 1 00:24:26.283 } 00:24:26.283 Got JSON-RPC error response 00:24:26.283 response: 00:24:26.283 { 00:24:26.283 "code": -1, 00:24:26.283 "message": "Operation not permitted" 00:24:26.283 } 00:24:26.283 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:26.283 [2024-12-10 00:06:01.198901] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:26.283 [2024-12-10 00:06:01.198934] bdev_nvme.c:6748:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:26.283 request: 00:24:26.283 { 00:24:26.283 "name": "TLSTEST", 00:24:26.283 "trtype": "tcp", 00:24:26.283 "traddr": "10.0.0.2", 00:24:26.283 "adrfam": "ipv4", 00:24:26.283 "trsvcid": "4420", 00:24:26.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:26.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:26.283 "prchk_reftag": false, 00:24:26.283 "prchk_guard": false, 00:24:26.283 "hdgst": false, 00:24:26.283 "ddgst": false, 00:24:26.283 "psk": "key0", 00:24:26.283 "allow_unrecognized_csi": false, 00:24:26.283 "method": "bdev_nvme_attach_controller", 00:24:26.283 "req_id": 1 00:24:26.283 } 00:24:26.283 Got JSON-RPC error response 00:24:26.283 response: 00:24:26.283 { 00:24:26.283 "code": -126, 00:24:26.283 "message": "Required key not available" 00:24:26.283 } 00:24:26.283 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 387651 00:24:26.283 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 387651 ']' 00:24:26.283 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 387651 00:24:26.283 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
387651 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 387651' 00:24:26.546 killing process with pid 387651 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 387651 00:24:26.546 Received shutdown signal, test time was about 10.000000 seconds 00:24:26.546 00:24:26.546 Latency(us) 00:24:26.546 [2024-12-09T23:06:01.482Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.546 [2024-12-09T23:06:01.482Z] =================================================================================================================== 00:24:26.546 [2024-12-09T23:06:01.482Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 387651 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 382933 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 382933 ']' 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 382933 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 382933 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 382933' 00:24:26.546 killing process with pid 382933 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 382933 00:24:26.546 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 382933 00:24:26.806 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:26.806 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:26.806 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:26.806 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:26.806 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:26.806 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.9m3ydLOiUq 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.9m3ydLOiUq 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=387978 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 387978 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 387978 ']' 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.807 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.807 [2024-12-10 00:06:01.740242] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:24:26.807 [2024-12-10 00:06:01.740288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.066 [2024-12-10 00:06:01.816795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.066 [2024-12-10 00:06:01.858562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.066 [2024-12-10 00:06:01.858595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
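The interchange key generated above (key_long) can be checked by hand: the base64 field between the second and third colons decodes to the configured key bytes followed by a four-byte trailer, which the NVMe TLS PSK interchange format defines as a CRC-32 of the key, and the 02 digest field corresponds to SHA-384. The quick inspection below is only an illustration and is not part of the captured run:

    # decode the payload of NVMeTLSkey-1:02:<base64>:  (value copied from the run above)
    echo 'MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==' \
        | base64 -d | xxd
    # first 48 bytes: the configured key 00112233445566778899aabbccddeeff0011223344556677
    # last 4 bytes:   the checksum trailer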
00:24:27.066 [2024-12-10 00:06:01.858602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.066 [2024-12-10 00:06:01.858609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.066 [2024-12-10 00:06:01.858614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.066 [2024-12-10 00:06:01.859148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.066 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.066 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:27.066 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.066 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.066 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.066 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.066 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.9m3ydLOiUq 00:24:27.066 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9m3ydLOiUq 00:24:27.066 00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:27.325 [2024-12-10 00:06:02.163446] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.325 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:27.585 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:27.844 [2024-12-10 00:06:02.548442] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:27.844 [2024-12-10 00:06:02.548650] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.844 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:27.844 malloc0 00:24:27.844 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:28.102 00:06:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9m3ydLOiUq 00:24:28.362 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9m3ydLOiUq 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- 
# local subnqn hostnqn psk 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9m3ydLOiUq 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=388231 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 388231 /var/tmp/bdevperf.sock 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 388231 ']' 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:28.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.621 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.621 [2024-12-10 00:06:03.355577] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
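Before this bdevperf instance connects, the lines above have already driven setup_nvmf_tgt against the target. Condensed to its RPC calls, with the long workspace path to rpc.py shortened and all arguments as in the run, the TLS-enabled target setup is:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.9m3ydLOiUq
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With a matching key registered on the initiator side, the attach below succeeds and TLSTESTn1 runs I/O for the full ten seconds.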
00:24:28.621 [2024-12-10 00:06:03.355621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388231 ] 00:24:28.621 [2024-12-10 00:06:03.431817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.621 [2024-12-10 00:06:03.471436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.880 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.880 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:28.880 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9m3ydLOiUq 00:24:28.880 00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:29.140 [2024-12-10 00:06:03.942921] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:29.140 TLSTESTn1 00:24:29.140 00:06:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:29.399 Running I/O for 10 seconds... 00:24:31.280 4860.00 IOPS, 18.98 MiB/s [2024-12-09T23:06:07.153Z] 5045.00 IOPS, 19.71 MiB/s [2024-12-09T23:06:08.530Z] 5121.00 IOPS, 20.00 MiB/s [2024-12-09T23:06:09.468Z] 4989.50 IOPS, 19.49 MiB/s [2024-12-09T23:06:10.404Z] 4899.20 IOPS, 19.14 MiB/s [2024-12-09T23:06:11.341Z] 4873.00 IOPS, 19.04 MiB/s [2024-12-09T23:06:12.277Z] 4868.43 IOPS, 19.02 MiB/s [2024-12-09T23:06:13.215Z] 4869.00 IOPS, 19.02 MiB/s [2024-12-09T23:06:14.153Z] 4902.67 IOPS, 19.15 MiB/s [2024-12-09T23:06:14.411Z] 4921.30 IOPS, 19.22 MiB/s 00:24:39.475 Latency(us) 00:24:39.475 [2024-12-09T23:06:14.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.475 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:39.475 Verification LBA range: start 0x0 length 0x2000 00:24:39.475 TLSTESTn1 : 10.02 4924.20 19.24 0.00 0.00 25954.41 5727.28 38523.77 00:24:39.475 [2024-12-09T23:06:14.411Z] =================================================================================================================== 00:24:39.475 [2024-12-09T23:06:14.411Z] Total : 4924.20 19.24 0.00 0.00 25954.41 5727.28 38523.77 00:24:39.475 { 00:24:39.475 "results": [ 00:24:39.475 { 00:24:39.475 "job": "TLSTESTn1", 00:24:39.475 "core_mask": "0x4", 00:24:39.475 "workload": "verify", 00:24:39.475 "status": "finished", 00:24:39.475 "verify_range": { 00:24:39.475 "start": 0, 00:24:39.475 "length": 8192 00:24:39.475 }, 00:24:39.475 "queue_depth": 128, 00:24:39.475 "io_size": 4096, 00:24:39.475 "runtime": 10.019896, 00:24:39.475 "iops": 4924.20280609699, 00:24:39.475 "mibps": 19.235167211316366, 00:24:39.475 "io_failed": 0, 00:24:39.475 "io_timeout": 0, 00:24:39.475 "avg_latency_us": 25954.408058511483, 00:24:39.475 "min_latency_us": 5727.276521739131, 00:24:39.475 "max_latency_us": 38523.77043478261 00:24:39.475 } 00:24:39.475 ], 00:24:39.475 
"core_count": 1 00:24:39.475 } 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 388231 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 388231 ']' 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 388231 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 388231 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 388231' 00:24:39.476 killing process with pid 388231 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 388231 00:24:39.476 Received shutdown signal, test time was about 10.000000 seconds 00:24:39.476 00:24:39.476 Latency(us) 00:24:39.476 [2024-12-09T23:06:14.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:39.476 [2024-12-09T23:06:14.412Z] =================================================================================================================== 00:24:39.476 [2024-12-09T23:06:14.412Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 388231 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.9m3ydLOiUq 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9m3ydLOiUq 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9m3ydLOiUq 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9m3ydLOiUq 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:39.476 
00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.9m3ydLOiUq 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=390459 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 390459 /var/tmp/bdevperf.sock 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 390459 ']' 00:24:39.476 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.734 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.734 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:39.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.734 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.734 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.735 [2024-12-10 00:06:14.450874] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:24:39.735 [2024-12-10 00:06:14.450921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390459 ] 00:24:39.735 [2024-12-10 00:06:14.520588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.735 [2024-12-10 00:06:14.559558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.735 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.735 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:39.735 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9m3ydLOiUq 00:24:39.993 [2024-12-10 00:06:14.830323] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.9m3ydLOiUq': 0100666 00:24:39.993 [2024-12-10 00:06:14.830348] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:39.993 request: 00:24:39.993 { 00:24:39.993 "name": "key0", 00:24:39.993 "path": "/tmp/tmp.9m3ydLOiUq", 00:24:39.993 "method": "keyring_file_add_key", 00:24:39.993 "req_id": 1 00:24:39.993 } 00:24:39.993 Got JSON-RPC error response 00:24:39.993 response: 00:24:39.993 { 00:24:39.993 "code": -1, 00:24:39.993 "message": "Operation not permitted" 00:24:39.993 } 00:24:39.993 00:06:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:40.253 [2024-12-10 00:06:15.018897] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:40.253 [2024-12-10 00:06:15.018927] bdev_nvme.c:6748:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:40.253 request: 00:24:40.253 { 00:24:40.253 "name": "TLSTEST", 00:24:40.253 "trtype": "tcp", 00:24:40.253 "traddr": "10.0.0.2", 00:24:40.253 "adrfam": "ipv4", 00:24:40.253 "trsvcid": "4420", 00:24:40.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:40.253 "prchk_reftag": false, 00:24:40.253 "prchk_guard": false, 00:24:40.253 "hdgst": false, 00:24:40.253 "ddgst": false, 00:24:40.253 "psk": "key0", 00:24:40.253 "allow_unrecognized_csi": false, 00:24:40.253 "method": "bdev_nvme_attach_controller", 00:24:40.253 "req_id": 1 00:24:40.253 } 00:24:40.253 Got JSON-RPC error response 00:24:40.253 response: 00:24:40.253 { 00:24:40.253 "code": -126, 00:24:40.253 "message": "Required key not available" 00:24:40.253 } 00:24:40.253 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 390459 00:24:40.253 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 390459 ']' 00:24:40.253 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 390459 00:24:40.253 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:40.253 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.253 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390459 00:24:40.253 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:40.253 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:40.253 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390459' 00:24:40.253 killing process with pid 390459 00:24:40.253 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 390459 00:24:40.253 Received shutdown signal, test time was about 10.000000 seconds 00:24:40.253 00:24:40.253 Latency(us) 00:24:40.253 [2024-12-09T23:06:15.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.253 [2024-12-09T23:06:15.190Z] =================================================================================================================== 00:24:40.254 [2024-12-09T23:06:15.190Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:40.254 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 390459 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 387978 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 387978 ']' 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 387978 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 387978 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 387978' 00:24:40.513 killing process with pid 387978 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 387978 00:24:40.513 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 387978 00:24:40.772 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:40.772 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:40.772 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:40.772 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.772 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=390599 
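The failures just above come from the keyring's permission check: once the key file is loosened to mode 0666, keyring_file_add_key rejects it ('Invalid permissions for key file ... 0100666') and the subsequent attach can only report -126 (Required key not available). The same check is about to be exercised on the target side below. In short, with the rpc.py path shortened as before, only a private copy of the PSK file is accepted:

    chmod 0666 /tmp/tmp.9m3ydLOiUq
    rpc.py keyring_file_add_key key0 /tmp/tmp.9m3ydLOiUq   # rejected: invalid permissions
    chmod 0600 /tmp/tmp.9m3ydLOiUq
    rpc.py keyring_file_add_key key0 /tmp/tmp.9m3ydLOiUq   # accepted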
00:24:40.772 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:40.772 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 390599 00:24:40.772 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 390599 ']' 00:24:40.772 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.772 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.772 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.772 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.772 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.772 [2024-12-10 00:06:15.517766] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:24:40.772 [2024-12-10 00:06:15.517813] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.773 [2024-12-10 00:06:15.597108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.773 [2024-12-10 00:06:15.634270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.773 [2024-12-10 00:06:15.634306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.773 [2024-12-10 00:06:15.634313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.773 [2024-12-10 00:06:15.634319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.773 [2024-12-10 00:06:15.634325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:40.773 [2024-12-10 00:06:15.634892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.9m3ydLOiUq 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.9m3ydLOiUq 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.9m3ydLOiUq 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9m3ydLOiUq 00:24:41.032 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:41.032 [2024-12-10 00:06:15.951297] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.291 00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:41.291 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:41.550 [2024-12-10 00:06:16.340250] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:41.550 [2024-12-10 00:06:16.340457] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.550 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:41.808 malloc0 00:24:41.808 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:42.067 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9m3ydLOiUq 00:24:42.067 
[2024-12-10 00:06:16.941702] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.9m3ydLOiUq': 0100666 00:24:42.067 [2024-12-10 00:06:16.941729] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:42.067 request: 00:24:42.067 { 00:24:42.067 "name": "key0", 00:24:42.067 "path": "/tmp/tmp.9m3ydLOiUq", 00:24:42.067 "method": "keyring_file_add_key", 00:24:42.067 "req_id": 1 00:24:42.067 } 00:24:42.067 Got JSON-RPC error response 00:24:42.067 response: 00:24:42.067 { 00:24:42.067 "code": -1, 00:24:42.067 "message": "Operation not permitted" 00:24:42.067 } 00:24:42.067 00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:42.326 [2024-12-10 00:06:17.146248] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:42.326 [2024-12-10 00:06:17.146282] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:42.326 request: 00:24:42.326 { 00:24:42.326 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.326 "host": "nqn.2016-06.io.spdk:host1", 00:24:42.326 "psk": "key0", 00:24:42.326 "method": "nvmf_subsystem_add_host", 00:24:42.326 "req_id": 1 00:24:42.326 } 00:24:42.326 Got JSON-RPC error response 00:24:42.326 response: 00:24:42.326 { 00:24:42.326 "code": -32603, 00:24:42.326 "message": "Internal error" 00:24:42.326 } 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 390599 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 390599 ']' 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 390599 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390599 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390599' 00:24:42.326 killing process with pid 390599 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 390599 00:24:42.326 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 390599 00:24:42.585 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.9m3ydLOiUq 00:24:42.585 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:42.585 00:06:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:42.585 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:42.585 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.585 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=390969 00:24:42.585 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:42.585 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 390969 00:24:42.585 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 390969 ']' 00:24:42.585 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.585 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.585 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.585 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.585 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.585 [2024-12-10 00:06:17.453360] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:24:42.585 [2024-12-10 00:06:17.453405] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.845 [2024-12-10 00:06:17.530568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.845 [2024-12-10 00:06:17.565720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.845 [2024-12-10 00:06:17.565756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.845 [2024-12-10 00:06:17.565763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.845 [2024-12-10 00:06:17.565769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.845 [2024-12-10 00:06:17.565774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
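On the target side the same 0666 permission check means the key never enters the keyring, so the later nvmf_subsystem_add_host --psk key0 fails with 'Key key0 does not exist' (-32603, Internal error) rather than a permission error: the PSK has to be registered before any host entry can reference it by name. With the file back at mode 0600, the setup below completes and bdevperf attaches successfully (TLSTESTn1). The required ordering, rpc.py path shortened as before, is simply:

    rpc.py keyring_file_add_key key0 /tmp/tmp.9m3ydLOiUq          # must succeed first
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0                      # references the key by name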
00:24:42.845 [2024-12-10 00:06:17.566334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.845 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.845 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:42.846 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:42.846 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:42.846 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.846 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.846 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.9m3ydLOiUq 00:24:42.846 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9m3ydLOiUq 00:24:42.846 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:43.107 [2024-12-10 00:06:17.882502] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.107 00:06:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:43.367 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:43.367 [2024-12-10 00:06:18.279521] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.367 [2024-12-10 00:06:18.279717] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.626 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:43.626 malloc0 00:24:43.626 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:43.883 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9m3ydLOiUq 00:24:44.142 00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:44.142 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:44.142 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=391231 00:24:44.142 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:44.142 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 391231 /var/tmp/bdevperf.sock 00:24:44.142 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 391231 ']' 00:24:44.142 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.142 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.142 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:44.142 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.142 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.401 [2024-12-10 00:06:19.091819] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:24:44.401 [2024-12-10 00:06:19.091868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391231 ] 00:24:44.401 [2024-12-10 00:06:19.168057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.401 [2024-12-10 00:06:19.207638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.401 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.401 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:44.401 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9m3ydLOiUq 00:24:44.660 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:44.919 [2024-12-10 00:06:19.659674] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:44.919 TLSTESTn1 00:24:44.919 00:06:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py save_config 00:24:45.179 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:45.179 "subsystems": [ 00:24:45.179 { 00:24:45.179 "subsystem": "keyring", 00:24:45.179 "config": [ 00:24:45.179 { 00:24:45.179 "method": "keyring_file_add_key", 00:24:45.179 "params": { 00:24:45.179 "name": "key0", 00:24:45.179 "path": "/tmp/tmp.9m3ydLOiUq" 00:24:45.179 } 00:24:45.179 } 00:24:45.179 ] 00:24:45.179 }, 00:24:45.179 { 00:24:45.179 "subsystem": "iobuf", 00:24:45.179 "config": [ 00:24:45.179 { 00:24:45.179 "method": "iobuf_set_options", 00:24:45.179 "params": { 00:24:45.179 "small_pool_count": 8192, 00:24:45.179 "large_pool_count": 1024, 00:24:45.179 "small_bufsize": 8192, 00:24:45.179 "large_bufsize": 135168, 00:24:45.179 "enable_numa": false 00:24:45.179 } 00:24:45.179 } 00:24:45.179 ] 00:24:45.179 }, 00:24:45.179 { 00:24:45.179 "subsystem": "sock", 00:24:45.179 "config": [ 00:24:45.179 { 00:24:45.179 "method": "sock_set_default_impl", 00:24:45.179 "params": { 00:24:45.179 "impl_name": "posix" 
00:24:45.179 } 00:24:45.179 }, 00:24:45.179 { 00:24:45.179 "method": "sock_impl_set_options", 00:24:45.179 "params": { 00:24:45.179 "impl_name": "ssl", 00:24:45.179 "recv_buf_size": 4096, 00:24:45.179 "send_buf_size": 4096, 00:24:45.179 "enable_recv_pipe": true, 00:24:45.179 "enable_quickack": false, 00:24:45.179 "enable_placement_id": 0, 00:24:45.179 "enable_zerocopy_send_server": true, 00:24:45.179 "enable_zerocopy_send_client": false, 00:24:45.179 "zerocopy_threshold": 0, 00:24:45.179 "tls_version": 0, 00:24:45.179 "enable_ktls": false 00:24:45.179 } 00:24:45.179 }, 00:24:45.179 { 00:24:45.179 "method": "sock_impl_set_options", 00:24:45.179 "params": { 00:24:45.179 "impl_name": "posix", 00:24:45.179 "recv_buf_size": 2097152, 00:24:45.179 "send_buf_size": 2097152, 00:24:45.179 "enable_recv_pipe": true, 00:24:45.179 "enable_quickack": false, 00:24:45.179 "enable_placement_id": 0, 00:24:45.179 "enable_zerocopy_send_server": true, 00:24:45.179 "enable_zerocopy_send_client": false, 00:24:45.179 "zerocopy_threshold": 0, 00:24:45.179 "tls_version": 0, 00:24:45.179 "enable_ktls": false 00:24:45.179 } 00:24:45.179 } 00:24:45.179 ] 00:24:45.179 }, 00:24:45.179 { 00:24:45.179 "subsystem": "vmd", 00:24:45.179 "config": [] 00:24:45.179 }, 00:24:45.179 { 00:24:45.179 "subsystem": "accel", 00:24:45.179 "config": [ 00:24:45.179 { 00:24:45.179 "method": "accel_set_options", 00:24:45.179 "params": { 00:24:45.179 "small_cache_size": 128, 00:24:45.179 "large_cache_size": 16, 00:24:45.179 "task_count": 2048, 00:24:45.179 "sequence_count": 2048, 00:24:45.179 "buf_count": 2048 00:24:45.179 } 00:24:45.179 } 00:24:45.179 ] 00:24:45.179 }, 00:24:45.179 { 00:24:45.179 "subsystem": "bdev", 00:24:45.179 "config": [ 00:24:45.179 { 00:24:45.179 "method": "bdev_set_options", 00:24:45.179 "params": { 00:24:45.179 "bdev_io_pool_size": 65535, 00:24:45.179 "bdev_io_cache_size": 256, 00:24:45.179 "bdev_auto_examine": true, 00:24:45.179 "iobuf_small_cache_size": 128, 00:24:45.179 "iobuf_large_cache_size": 16 00:24:45.179 } 00:24:45.179 }, 00:24:45.179 { 00:24:45.179 "method": "bdev_raid_set_options", 00:24:45.179 "params": { 00:24:45.179 "process_window_size_kb": 1024, 00:24:45.179 "process_max_bandwidth_mb_sec": 0 00:24:45.179 } 00:24:45.179 }, 00:24:45.179 { 00:24:45.179 "method": "bdev_iscsi_set_options", 00:24:45.179 "params": { 00:24:45.179 "timeout_sec": 30 00:24:45.179 } 00:24:45.179 }, 00:24:45.179 { 00:24:45.179 "method": "bdev_nvme_set_options", 00:24:45.180 "params": { 00:24:45.180 "action_on_timeout": "none", 00:24:45.180 "timeout_us": 0, 00:24:45.180 "timeout_admin_us": 0, 00:24:45.180 "keep_alive_timeout_ms": 10000, 00:24:45.180 "arbitration_burst": 0, 00:24:45.180 "low_priority_weight": 0, 00:24:45.180 "medium_priority_weight": 0, 00:24:45.180 "high_priority_weight": 0, 00:24:45.180 "nvme_adminq_poll_period_us": 10000, 00:24:45.180 "nvme_ioq_poll_period_us": 0, 00:24:45.180 "io_queue_requests": 0, 00:24:45.180 "delay_cmd_submit": true, 00:24:45.180 "transport_retry_count": 4, 00:24:45.180 "bdev_retry_count": 3, 00:24:45.180 "transport_ack_timeout": 0, 00:24:45.180 "ctrlr_loss_timeout_sec": 0, 00:24:45.180 "reconnect_delay_sec": 0, 00:24:45.180 "fast_io_fail_timeout_sec": 0, 00:24:45.180 "disable_auto_failback": false, 00:24:45.180 "generate_uuids": false, 00:24:45.180 "transport_tos": 0, 00:24:45.180 "nvme_error_stat": false, 00:24:45.180 "rdma_srq_size": 0, 00:24:45.180 "io_path_stat": false, 00:24:45.180 "allow_accel_sequence": false, 00:24:45.180 "rdma_max_cq_size": 0, 00:24:45.180 
"rdma_cm_event_timeout_ms": 0, 00:24:45.180 "dhchap_digests": [ 00:24:45.180 "sha256", 00:24:45.180 "sha384", 00:24:45.180 "sha512" 00:24:45.180 ], 00:24:45.180 "dhchap_dhgroups": [ 00:24:45.180 "null", 00:24:45.180 "ffdhe2048", 00:24:45.180 "ffdhe3072", 00:24:45.180 "ffdhe4096", 00:24:45.180 "ffdhe6144", 00:24:45.180 "ffdhe8192" 00:24:45.180 ], 00:24:45.180 "rdma_umr_per_io": false 00:24:45.180 } 00:24:45.180 }, 00:24:45.180 { 00:24:45.180 "method": "bdev_nvme_set_hotplug", 00:24:45.180 "params": { 00:24:45.180 "period_us": 100000, 00:24:45.180 "enable": false 00:24:45.180 } 00:24:45.180 }, 00:24:45.180 { 00:24:45.180 "method": "bdev_malloc_create", 00:24:45.180 "params": { 00:24:45.180 "name": "malloc0", 00:24:45.180 "num_blocks": 8192, 00:24:45.180 "block_size": 4096, 00:24:45.180 "physical_block_size": 4096, 00:24:45.180 "uuid": "11119325-be03-419d-abfe-9be5952d6fd9", 00:24:45.180 "optimal_io_boundary": 0, 00:24:45.180 "md_size": 0, 00:24:45.180 "dif_type": 0, 00:24:45.180 "dif_is_head_of_md": false, 00:24:45.180 "dif_pi_format": 0 00:24:45.180 } 00:24:45.180 }, 00:24:45.180 { 00:24:45.180 "method": "bdev_wait_for_examine" 00:24:45.180 } 00:24:45.180 ] 00:24:45.180 }, 00:24:45.180 { 00:24:45.180 "subsystem": "nbd", 00:24:45.180 "config": [] 00:24:45.180 }, 00:24:45.180 { 00:24:45.180 "subsystem": "scheduler", 00:24:45.180 "config": [ 00:24:45.180 { 00:24:45.180 "method": "framework_set_scheduler", 00:24:45.180 "params": { 00:24:45.180 "name": "static" 00:24:45.180 } 00:24:45.180 } 00:24:45.180 ] 00:24:45.180 }, 00:24:45.180 { 00:24:45.180 "subsystem": "nvmf", 00:24:45.180 "config": [ 00:24:45.180 { 00:24:45.180 "method": "nvmf_set_config", 00:24:45.180 "params": { 00:24:45.180 "discovery_filter": "match_any", 00:24:45.180 "admin_cmd_passthru": { 00:24:45.180 "identify_ctrlr": false 00:24:45.180 }, 00:24:45.180 "dhchap_digests": [ 00:24:45.180 "sha256", 00:24:45.180 "sha384", 00:24:45.180 "sha512" 00:24:45.180 ], 00:24:45.180 "dhchap_dhgroups": [ 00:24:45.180 "null", 00:24:45.180 "ffdhe2048", 00:24:45.180 "ffdhe3072", 00:24:45.180 "ffdhe4096", 00:24:45.180 "ffdhe6144", 00:24:45.180 "ffdhe8192" 00:24:45.180 ] 00:24:45.180 } 00:24:45.180 }, 00:24:45.180 { 00:24:45.180 "method": "nvmf_set_max_subsystems", 00:24:45.180 "params": { 00:24:45.180 "max_subsystems": 1024 00:24:45.180 } 00:24:45.180 }, 00:24:45.180 { 00:24:45.180 "method": "nvmf_set_crdt", 00:24:45.180 "params": { 00:24:45.180 "crdt1": 0, 00:24:45.180 "crdt2": 0, 00:24:45.180 "crdt3": 0 00:24:45.180 } 00:24:45.180 }, 00:24:45.180 { 00:24:45.180 "method": "nvmf_create_transport", 00:24:45.180 "params": { 00:24:45.180 "trtype": "TCP", 00:24:45.180 "max_queue_depth": 128, 00:24:45.180 "max_io_qpairs_per_ctrlr": 127, 00:24:45.180 "in_capsule_data_size": 4096, 00:24:45.180 "max_io_size": 131072, 00:24:45.180 "io_unit_size": 131072, 00:24:45.180 "max_aq_depth": 128, 00:24:45.180 "num_shared_buffers": 511, 00:24:45.180 "buf_cache_size": 4294967295, 00:24:45.180 "dif_insert_or_strip": false, 00:24:45.180 "zcopy": false, 00:24:45.180 "c2h_success": false, 00:24:45.180 "sock_priority": 0, 00:24:45.180 "abort_timeout_sec": 1, 00:24:45.180 "ack_timeout": 0, 00:24:45.180 "data_wr_pool_size": 0 00:24:45.180 } 00:24:45.180 }, 00:24:45.180 { 00:24:45.180 "method": "nvmf_create_subsystem", 00:24:45.180 "params": { 00:24:45.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.180 "allow_any_host": false, 00:24:45.180 "serial_number": "SPDK00000000000001", 00:24:45.180 "model_number": "SPDK bdev Controller", 00:24:45.180 "max_namespaces": 10, 
00:24:45.180 "min_cntlid": 1, 00:24:45.180 "max_cntlid": 65519, 00:24:45.180 "ana_reporting": false 00:24:45.180 } 00:24:45.180 }, 00:24:45.180 { 00:24:45.180 "method": "nvmf_subsystem_add_host", 00:24:45.180 "params": { 00:24:45.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.180 "host": "nqn.2016-06.io.spdk:host1", 00:24:45.180 "psk": "key0" 00:24:45.180 } 00:24:45.180 }, 00:24:45.180 { 00:24:45.180 "method": "nvmf_subsystem_add_ns", 00:24:45.180 "params": { 00:24:45.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.180 "namespace": { 00:24:45.180 "nsid": 1, 00:24:45.180 "bdev_name": "malloc0", 00:24:45.180 "nguid": "11119325BE03419DABFE9BE5952D6FD9", 00:24:45.180 "uuid": "11119325-be03-419d-abfe-9be5952d6fd9", 00:24:45.180 "no_auto_visible": false 00:24:45.180 } 00:24:45.180 } 00:24:45.180 }, 00:24:45.180 { 00:24:45.180 "method": "nvmf_subsystem_add_listener", 00:24:45.180 "params": { 00:24:45.180 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.180 "listen_address": { 00:24:45.180 "trtype": "TCP", 00:24:45.180 "adrfam": "IPv4", 00:24:45.180 "traddr": "10.0.0.2", 00:24:45.180 "trsvcid": "4420" 00:24:45.180 }, 00:24:45.180 "secure_channel": true 00:24:45.180 } 00:24:45.180 } 00:24:45.180 ] 00:24:45.180 } 00:24:45.180 ] 00:24:45.180 }' 00:24:45.180 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:45.447 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:45.447 "subsystems": [ 00:24:45.447 { 00:24:45.447 "subsystem": "keyring", 00:24:45.447 "config": [ 00:24:45.447 { 00:24:45.447 "method": "keyring_file_add_key", 00:24:45.447 "params": { 00:24:45.447 "name": "key0", 00:24:45.447 "path": "/tmp/tmp.9m3ydLOiUq" 00:24:45.447 } 00:24:45.447 } 00:24:45.447 ] 00:24:45.447 }, 00:24:45.447 { 00:24:45.447 "subsystem": "iobuf", 00:24:45.447 "config": [ 00:24:45.447 { 00:24:45.447 "method": "iobuf_set_options", 00:24:45.447 "params": { 00:24:45.447 "small_pool_count": 8192, 00:24:45.447 "large_pool_count": 1024, 00:24:45.447 "small_bufsize": 8192, 00:24:45.447 "large_bufsize": 135168, 00:24:45.447 "enable_numa": false 00:24:45.447 } 00:24:45.447 } 00:24:45.447 ] 00:24:45.447 }, 00:24:45.447 { 00:24:45.447 "subsystem": "sock", 00:24:45.447 "config": [ 00:24:45.447 { 00:24:45.447 "method": "sock_set_default_impl", 00:24:45.447 "params": { 00:24:45.447 "impl_name": "posix" 00:24:45.447 } 00:24:45.447 }, 00:24:45.447 { 00:24:45.447 "method": "sock_impl_set_options", 00:24:45.447 "params": { 00:24:45.447 "impl_name": "ssl", 00:24:45.447 "recv_buf_size": 4096, 00:24:45.447 "send_buf_size": 4096, 00:24:45.447 "enable_recv_pipe": true, 00:24:45.447 "enable_quickack": false, 00:24:45.447 "enable_placement_id": 0, 00:24:45.447 "enable_zerocopy_send_server": true, 00:24:45.447 "enable_zerocopy_send_client": false, 00:24:45.447 "zerocopy_threshold": 0, 00:24:45.447 "tls_version": 0, 00:24:45.447 "enable_ktls": false 00:24:45.447 } 00:24:45.447 }, 00:24:45.447 { 00:24:45.447 "method": "sock_impl_set_options", 00:24:45.447 "params": { 00:24:45.447 "impl_name": "posix", 00:24:45.447 "recv_buf_size": 2097152, 00:24:45.447 "send_buf_size": 2097152, 00:24:45.447 "enable_recv_pipe": true, 00:24:45.447 "enable_quickack": false, 00:24:45.447 "enable_placement_id": 0, 00:24:45.447 "enable_zerocopy_send_server": true, 00:24:45.447 "enable_zerocopy_send_client": false, 00:24:45.447 "zerocopy_threshold": 0, 00:24:45.447 "tls_version": 0, 00:24:45.447 
"enable_ktls": false 00:24:45.447 } 00:24:45.447 } 00:24:45.447 ] 00:24:45.447 }, 00:24:45.447 { 00:24:45.447 "subsystem": "vmd", 00:24:45.447 "config": [] 00:24:45.447 }, 00:24:45.447 { 00:24:45.447 "subsystem": "accel", 00:24:45.447 "config": [ 00:24:45.447 { 00:24:45.447 "method": "accel_set_options", 00:24:45.447 "params": { 00:24:45.447 "small_cache_size": 128, 00:24:45.447 "large_cache_size": 16, 00:24:45.447 "task_count": 2048, 00:24:45.447 "sequence_count": 2048, 00:24:45.447 "buf_count": 2048 00:24:45.447 } 00:24:45.447 } 00:24:45.447 ] 00:24:45.447 }, 00:24:45.447 { 00:24:45.447 "subsystem": "bdev", 00:24:45.447 "config": [ 00:24:45.447 { 00:24:45.447 "method": "bdev_set_options", 00:24:45.447 "params": { 00:24:45.447 "bdev_io_pool_size": 65535, 00:24:45.447 "bdev_io_cache_size": 256, 00:24:45.447 "bdev_auto_examine": true, 00:24:45.447 "iobuf_small_cache_size": 128, 00:24:45.447 "iobuf_large_cache_size": 16 00:24:45.447 } 00:24:45.447 }, 00:24:45.447 { 00:24:45.447 "method": "bdev_raid_set_options", 00:24:45.447 "params": { 00:24:45.447 "process_window_size_kb": 1024, 00:24:45.447 "process_max_bandwidth_mb_sec": 0 00:24:45.447 } 00:24:45.447 }, 00:24:45.447 { 00:24:45.447 "method": "bdev_iscsi_set_options", 00:24:45.447 "params": { 00:24:45.447 "timeout_sec": 30 00:24:45.447 } 00:24:45.447 }, 00:24:45.447 { 00:24:45.447 "method": "bdev_nvme_set_options", 00:24:45.447 "params": { 00:24:45.447 "action_on_timeout": "none", 00:24:45.447 "timeout_us": 0, 00:24:45.447 "timeout_admin_us": 0, 00:24:45.447 "keep_alive_timeout_ms": 10000, 00:24:45.447 "arbitration_burst": 0, 00:24:45.447 "low_priority_weight": 0, 00:24:45.447 "medium_priority_weight": 0, 00:24:45.447 "high_priority_weight": 0, 00:24:45.447 "nvme_adminq_poll_period_us": 10000, 00:24:45.447 "nvme_ioq_poll_period_us": 0, 00:24:45.447 "io_queue_requests": 512, 00:24:45.447 "delay_cmd_submit": true, 00:24:45.447 "transport_retry_count": 4, 00:24:45.447 "bdev_retry_count": 3, 00:24:45.447 "transport_ack_timeout": 0, 00:24:45.447 "ctrlr_loss_timeout_sec": 0, 00:24:45.447 "reconnect_delay_sec": 0, 00:24:45.447 "fast_io_fail_timeout_sec": 0, 00:24:45.447 "disable_auto_failback": false, 00:24:45.447 "generate_uuids": false, 00:24:45.447 "transport_tos": 0, 00:24:45.447 "nvme_error_stat": false, 00:24:45.447 "rdma_srq_size": 0, 00:24:45.447 "io_path_stat": false, 00:24:45.447 "allow_accel_sequence": false, 00:24:45.447 "rdma_max_cq_size": 0, 00:24:45.447 "rdma_cm_event_timeout_ms": 0, 00:24:45.447 "dhchap_digests": [ 00:24:45.447 "sha256", 00:24:45.447 "sha384", 00:24:45.447 "sha512" 00:24:45.447 ], 00:24:45.447 "dhchap_dhgroups": [ 00:24:45.447 "null", 00:24:45.447 "ffdhe2048", 00:24:45.447 "ffdhe3072", 00:24:45.447 "ffdhe4096", 00:24:45.447 "ffdhe6144", 00:24:45.447 "ffdhe8192" 00:24:45.447 ], 00:24:45.447 "rdma_umr_per_io": false 00:24:45.447 } 00:24:45.447 }, 00:24:45.447 { 00:24:45.447 "method": "bdev_nvme_attach_controller", 00:24:45.447 "params": { 00:24:45.447 "name": "TLSTEST", 00:24:45.447 "trtype": "TCP", 00:24:45.447 "adrfam": "IPv4", 00:24:45.447 "traddr": "10.0.0.2", 00:24:45.447 "trsvcid": "4420", 00:24:45.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.447 "prchk_reftag": false, 00:24:45.447 "prchk_guard": false, 00:24:45.447 "ctrlr_loss_timeout_sec": 0, 00:24:45.447 "reconnect_delay_sec": 0, 00:24:45.447 "fast_io_fail_timeout_sec": 0, 00:24:45.447 "psk": "key0", 00:24:45.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:45.447 "hdgst": false, 00:24:45.447 "ddgst": false, 00:24:45.447 "multipath": "multipath" 
00:24:45.447 } 00:24:45.447 }, 00:24:45.447 { 00:24:45.447 "method": "bdev_nvme_set_hotplug", 00:24:45.447 "params": { 00:24:45.447 "period_us": 100000, 00:24:45.447 "enable": false 00:24:45.447 } 00:24:45.447 }, 00:24:45.447 { 00:24:45.447 "method": "bdev_wait_for_examine" 00:24:45.447 } 00:24:45.447 ] 00:24:45.447 }, 00:24:45.447 { 00:24:45.447 "subsystem": "nbd", 00:24:45.447 "config": [] 00:24:45.447 } 00:24:45.447 ] 00:24:45.447 }' 00:24:45.447 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 391231 00:24:45.447 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 391231 ']' 00:24:45.447 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 391231 00:24:45.447 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:45.447 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.447 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 391231 00:24:45.447 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:45.447 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:45.447 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 391231' 00:24:45.447 killing process with pid 391231 00:24:45.447 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 391231 00:24:45.447 Received shutdown signal, test time was about 10.000000 seconds 00:24:45.447 00:24:45.448 Latency(us) 00:24:45.448 [2024-12-09T23:06:20.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.448 [2024-12-09T23:06:20.384Z] =================================================================================================================== 00:24:45.448 [2024-12-09T23:06:20.384Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:45.448 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 391231 00:24:45.708 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 390969 00:24:45.708 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 390969 ']' 00:24:45.708 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 390969 00:24:45.708 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:45.708 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.708 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 390969 00:24:45.708 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:45.708 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:45.708 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 390969' 00:24:45.708 killing process with pid 390969 00:24:45.708 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 390969 00:24:45.708 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 390969 
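Note: the stage that just finished (target/tls.sh@186 through @202 above) boils down to the RPC sequence below. This is a condensed sketch reassembled from the commands already visible in this log, not an additional step executed by the job; $RPC and $KEY are shorthand introduced here for the rpc.py script path and the PSK file /tmp/tmp.9m3ydLOiUq used by this run, and it assumes nvmf_tgt and bdevperf are already listening on /var/tmp/spdk.sock and /var/tmp/bdevperf.sock as they are above.

  # Condensed sketch; shorthand variables are introduced here and are not part of the original script.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
  KEY=/tmp/tmp.9m3ydLOiUq
  # Target side (default RPC socket /var/tmp/spdk.sock):
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener (experimental)
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 $KEY
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # Initiator side (bdevperf started with -z -r /var/tmp/bdevperf.sock):
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 $KEY
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

The two save_config dumps captured above (tgtconf and bdevperfconf) are the JSON form of this same state; the next stage of the test replays them by feeding them back into nvmf_tgt via -c /dev/fd/62 and into bdevperf via -c /dev/fd/63.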
00:24:45.968 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:45.968 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:45.968 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:45.968 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:45.968 "subsystems": [ 00:24:45.968 { 00:24:45.968 "subsystem": "keyring", 00:24:45.968 "config": [ 00:24:45.968 { 00:24:45.968 "method": "keyring_file_add_key", 00:24:45.968 "params": { 00:24:45.968 "name": "key0", 00:24:45.968 "path": "/tmp/tmp.9m3ydLOiUq" 00:24:45.968 } 00:24:45.968 } 00:24:45.968 ] 00:24:45.968 }, 00:24:45.968 { 00:24:45.968 "subsystem": "iobuf", 00:24:45.968 "config": [ 00:24:45.968 { 00:24:45.968 "method": "iobuf_set_options", 00:24:45.968 "params": { 00:24:45.968 "small_pool_count": 8192, 00:24:45.968 "large_pool_count": 1024, 00:24:45.968 "small_bufsize": 8192, 00:24:45.968 "large_bufsize": 135168, 00:24:45.968 "enable_numa": false 00:24:45.968 } 00:24:45.968 } 00:24:45.968 ] 00:24:45.968 }, 00:24:45.968 { 00:24:45.968 "subsystem": "sock", 00:24:45.968 "config": [ 00:24:45.968 { 00:24:45.968 "method": "sock_set_default_impl", 00:24:45.968 "params": { 00:24:45.968 "impl_name": "posix" 00:24:45.968 } 00:24:45.968 }, 00:24:45.968 { 00:24:45.968 "method": "sock_impl_set_options", 00:24:45.968 "params": { 00:24:45.968 "impl_name": "ssl", 00:24:45.968 "recv_buf_size": 4096, 00:24:45.968 "send_buf_size": 4096, 00:24:45.968 "enable_recv_pipe": true, 00:24:45.968 "enable_quickack": false, 00:24:45.968 "enable_placement_id": 0, 00:24:45.968 "enable_zerocopy_send_server": true, 00:24:45.968 "enable_zerocopy_send_client": false, 00:24:45.968 "zerocopy_threshold": 0, 00:24:45.968 "tls_version": 0, 00:24:45.968 "enable_ktls": false 00:24:45.968 } 00:24:45.968 }, 00:24:45.968 { 00:24:45.968 "method": "sock_impl_set_options", 00:24:45.968 "params": { 00:24:45.968 "impl_name": "posix", 00:24:45.968 "recv_buf_size": 2097152, 00:24:45.968 "send_buf_size": 2097152, 00:24:45.968 "enable_recv_pipe": true, 00:24:45.969 "enable_quickack": false, 00:24:45.969 "enable_placement_id": 0, 00:24:45.969 "enable_zerocopy_send_server": true, 00:24:45.969 "enable_zerocopy_send_client": false, 00:24:45.969 "zerocopy_threshold": 0, 00:24:45.969 "tls_version": 0, 00:24:45.969 "enable_ktls": false 00:24:45.969 } 00:24:45.969 } 00:24:45.969 ] 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "subsystem": "vmd", 00:24:45.969 "config": [] 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "subsystem": "accel", 00:24:45.969 "config": [ 00:24:45.969 { 00:24:45.969 "method": "accel_set_options", 00:24:45.969 "params": { 00:24:45.969 "small_cache_size": 128, 00:24:45.969 "large_cache_size": 16, 00:24:45.969 "task_count": 2048, 00:24:45.969 "sequence_count": 2048, 00:24:45.969 "buf_count": 2048 00:24:45.969 } 00:24:45.969 } 00:24:45.969 ] 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "subsystem": "bdev", 00:24:45.969 "config": [ 00:24:45.969 { 00:24:45.969 "method": "bdev_set_options", 00:24:45.969 "params": { 00:24:45.969 "bdev_io_pool_size": 65535, 00:24:45.969 "bdev_io_cache_size": 256, 00:24:45.969 "bdev_auto_examine": true, 00:24:45.969 "iobuf_small_cache_size": 128, 00:24:45.969 "iobuf_large_cache_size": 16 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "bdev_raid_set_options", 00:24:45.969 "params": { 00:24:45.969 "process_window_size_kb": 1024, 00:24:45.969 
"process_max_bandwidth_mb_sec": 0 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "bdev_iscsi_set_options", 00:24:45.969 "params": { 00:24:45.969 "timeout_sec": 30 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "bdev_nvme_set_options", 00:24:45.969 "params": { 00:24:45.969 "action_on_timeout": "none", 00:24:45.969 "timeout_us": 0, 00:24:45.969 "timeout_admin_us": 0, 00:24:45.969 "keep_alive_timeout_ms": 10000, 00:24:45.969 "arbitration_burst": 0, 00:24:45.969 "low_priority_weight": 0, 00:24:45.969 "medium_priority_weight": 0, 00:24:45.969 "high_priority_weight": 0, 00:24:45.969 "nvme_adminq_poll_period_us": 10000, 00:24:45.969 "nvme_ioq_poll_period_us": 0, 00:24:45.969 "io_queue_requests": 0, 00:24:45.969 "delay_cmd_submit": true, 00:24:45.969 "transport_retry_count": 4, 00:24:45.969 "bdev_retry_count": 3, 00:24:45.969 "transport_ack_timeout": 0, 00:24:45.969 "ctrlr_loss_timeout_sec": 0, 00:24:45.969 "reconnect_delay_sec": 0, 00:24:45.969 "fast_io_fail_timeout_sec": 0, 00:24:45.969 "disable_auto_failback": false, 00:24:45.969 "generate_uuids": false, 00:24:45.969 "transport_tos": 0, 00:24:45.969 "nvme_error_stat": false, 00:24:45.969 "rdma_srq_size": 0, 00:24:45.969 "io_path_stat": false, 00:24:45.969 "allow_accel_sequence": false, 00:24:45.969 "rdma_max_cq_size": 0, 00:24:45.969 "rdma_cm_event_timeout_ms": 0, 00:24:45.969 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.969 "dhchap_digests": [ 00:24:45.969 "sha256", 00:24:45.969 "sha384", 00:24:45.969 "sha512" 00:24:45.969 ], 00:24:45.969 "dhchap_dhgroups": [ 00:24:45.969 "null", 00:24:45.969 "ffdhe2048", 00:24:45.969 "ffdhe3072", 00:24:45.969 "ffdhe4096", 00:24:45.969 "ffdhe6144", 00:24:45.969 "ffdhe8192" 00:24:45.969 ], 00:24:45.969 "rdma_umr_per_io": false 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "bdev_nvme_set_hotplug", 00:24:45.969 "params": { 00:24:45.969 "period_us": 100000, 00:24:45.969 "enable": false 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "bdev_malloc_create", 00:24:45.969 "params": { 00:24:45.969 "name": "malloc0", 00:24:45.969 "num_blocks": 8192, 00:24:45.969 "block_size": 4096, 00:24:45.969 "physical_block_size": 4096, 00:24:45.969 "uuid": "11119325-be03-419d-abfe-9be5952d6fd9", 00:24:45.969 "optimal_io_boundary": 0, 00:24:45.969 "md_size": 0, 00:24:45.969 "dif_type": 0, 00:24:45.969 "dif_is_head_of_md": false, 00:24:45.969 "dif_pi_format": 0 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "bdev_wait_for_examine" 00:24:45.969 } 00:24:45.969 ] 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "subsystem": "nbd", 00:24:45.969 "config": [] 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "subsystem": "scheduler", 00:24:45.969 "config": [ 00:24:45.969 { 00:24:45.969 "method": "framework_set_scheduler", 00:24:45.969 "params": { 00:24:45.969 "name": "static" 00:24:45.969 } 00:24:45.969 } 00:24:45.969 ] 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "subsystem": "nvmf", 00:24:45.969 "config": [ 00:24:45.969 { 00:24:45.969 "method": "nvmf_set_config", 00:24:45.969 "params": { 00:24:45.969 "discovery_filter": "match_any", 00:24:45.969 "admin_cmd_passthru": { 00:24:45.969 "identify_ctrlr": false 00:24:45.969 }, 00:24:45.969 "dhchap_digests": [ 00:24:45.969 "sha256", 00:24:45.969 "sha384", 00:24:45.969 "sha512" 00:24:45.969 ], 00:24:45.969 "dhchap_dhgroups": [ 00:24:45.969 "null", 00:24:45.969 "ffdhe2048", 00:24:45.969 "ffdhe3072", 00:24:45.969 "ffdhe4096", 00:24:45.969 "ffdhe6144", 
00:24:45.969 "ffdhe8192" 00:24:45.969 ] 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "nvmf_set_max_subsystems", 00:24:45.969 "params": { 00:24:45.969 "max_subsystems": 1024 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "nvmf_set_crdt", 00:24:45.969 "params": { 00:24:45.969 "crdt1": 0, 00:24:45.969 "crdt2": 0, 00:24:45.969 "crdt3": 0 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "nvmf_create_transport", 00:24:45.969 "params": { 00:24:45.969 "trtype": "TCP", 00:24:45.969 "max_queue_depth": 128, 00:24:45.969 "max_io_qpairs_per_ctrlr": 127, 00:24:45.969 "in_capsule_data_size": 4096, 00:24:45.969 "max_io_size": 131072, 00:24:45.969 "io_unit_size": 131072, 00:24:45.969 "max_aq_depth": 128, 00:24:45.969 "num_shared_buffers": 511, 00:24:45.969 "buf_cache_size": 4294967295, 00:24:45.969 "dif_insert_or_strip": false, 00:24:45.969 "zcopy": false, 00:24:45.969 "c2h_success": false, 00:24:45.969 "sock_priority": 0, 00:24:45.969 "abort_timeout_sec": 1, 00:24:45.969 "ack_timeout": 0, 00:24:45.969 "data_wr_pool_size": 0 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "nvmf_create_subsystem", 00:24:45.969 "params": { 00:24:45.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.969 "allow_any_host": false, 00:24:45.969 "serial_number": "SPDK00000000000001", 00:24:45.969 "model_number": "SPDK bdev Controller", 00:24:45.969 "max_namespaces": 10, 00:24:45.969 "min_cntlid": 1, 00:24:45.969 "max_cntlid": 65519, 00:24:45.969 "ana_reporting": false 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "nvmf_subsystem_add_host", 00:24:45.969 "params": { 00:24:45.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.969 "host": "nqn.2016-06.io.spdk:host1", 00:24:45.969 "psk": "key0" 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "nvmf_subsystem_add_ns", 00:24:45.969 "params": { 00:24:45.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.969 "namespace": { 00:24:45.969 "nsid": 1, 00:24:45.969 "bdev_name": "malloc0", 00:24:45.969 "nguid": "11119325BE03419DABFE9BE5952D6FD9", 00:24:45.969 "uuid": "11119325-be03-419d-abfe-9be5952d6fd9", 00:24:45.969 "no_auto_visible": false 00:24:45.969 } 00:24:45.969 } 00:24:45.969 }, 00:24:45.969 { 00:24:45.969 "method": "nvmf_subsystem_add_listener", 00:24:45.969 "params": { 00:24:45.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.969 "listen_address": { 00:24:45.969 "trtype": "TCP", 00:24:45.970 "adrfam": "IPv4", 00:24:45.970 "traddr": "10.0.0.2", 00:24:45.970 "trsvcid": "4420" 00:24:45.970 }, 00:24:45.970 "secure_channel": true 00:24:45.970 } 00:24:45.970 } 00:24:45.970 ] 00:24:45.970 } 00:24:45.970 ] 00:24:45.970 }' 00:24:45.970 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=391480 00:24:45.970 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:45.970 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 391480 00:24:45.970 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 391480 ']' 00:24:45.970 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.970 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.970 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.970 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.970 00:06:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.970 [2024-12-10 00:06:20.803855] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:24:45.970 [2024-12-10 00:06:20.803898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.970 [2024-12-10 00:06:20.883450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.233 [2024-12-10 00:06:20.925024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.233 [2024-12-10 00:06:20.925055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.233 [2024-12-10 00:06:20.925063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.233 [2024-12-10 00:06:20.925070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.233 [2024-12-10 00:06:20.925075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.233 [2024-12-10 00:06:20.925613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.233 [2024-12-10 00:06:21.139300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.491 [2024-12-10 00:06:21.171334] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:46.491 [2024-12-10 00:06:21.171553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.752 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.752 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:46.752 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:46.752 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:46.752 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.752 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.752 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=391726 00:24:46.752 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 391726 /var/tmp/bdevperf.sock 00:24:46.752 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 391726 ']' 00:24:46.752 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.752 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:46.752 00:06:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.752 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:46.752 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:46.752 "subsystems": [ 00:24:46.752 { 00:24:46.752 "subsystem": "keyring", 00:24:46.752 "config": [ 00:24:46.752 { 00:24:46.752 "method": "keyring_file_add_key", 00:24:46.752 "params": { 00:24:46.752 "name": "key0", 00:24:46.752 "path": "/tmp/tmp.9m3ydLOiUq" 00:24:46.752 } 00:24:46.752 } 00:24:46.752 ] 00:24:46.752 }, 00:24:46.752 { 00:24:46.752 "subsystem": "iobuf", 00:24:46.752 "config": [ 00:24:46.752 { 00:24:46.752 "method": "iobuf_set_options", 00:24:46.752 "params": { 00:24:46.752 "small_pool_count": 8192, 00:24:46.752 "large_pool_count": 1024, 00:24:46.752 "small_bufsize": 8192, 00:24:46.752 "large_bufsize": 135168, 00:24:46.752 "enable_numa": false 00:24:46.752 } 00:24:46.752 } 00:24:46.752 ] 00:24:46.752 }, 00:24:46.752 { 00:24:46.752 "subsystem": "sock", 00:24:46.752 "config": [ 00:24:46.752 { 00:24:46.752 "method": "sock_set_default_impl", 00:24:46.752 "params": { 00:24:46.752 "impl_name": "posix" 00:24:46.752 } 00:24:46.752 }, 00:24:46.752 { 00:24:46.752 "method": "sock_impl_set_options", 00:24:46.752 "params": { 00:24:46.752 "impl_name": "ssl", 00:24:46.752 "recv_buf_size": 4096, 00:24:46.752 "send_buf_size": 4096, 00:24:46.752 "enable_recv_pipe": true, 00:24:46.752 "enable_quickack": false, 00:24:46.752 "enable_placement_id": 0, 00:24:46.752 "enable_zerocopy_send_server": true, 00:24:46.752 "enable_zerocopy_send_client": false, 00:24:46.752 "zerocopy_threshold": 0, 00:24:46.752 "tls_version": 0, 00:24:46.752 "enable_ktls": false 00:24:46.752 } 00:24:46.752 }, 00:24:46.752 { 00:24:46.752 "method": "sock_impl_set_options", 00:24:46.752 "params": { 00:24:46.752 "impl_name": "posix", 00:24:46.752 "recv_buf_size": 2097152, 00:24:46.752 "send_buf_size": 2097152, 00:24:46.752 "enable_recv_pipe": true, 00:24:46.752 "enable_quickack": false, 00:24:46.752 "enable_placement_id": 0, 00:24:46.752 "enable_zerocopy_send_server": true, 00:24:46.752 "enable_zerocopy_send_client": false, 00:24:46.752 "zerocopy_threshold": 0, 00:24:46.752 "tls_version": 0, 00:24:46.752 "enable_ktls": false 00:24:46.752 } 00:24:46.752 } 00:24:46.752 ] 00:24:46.752 }, 00:24:46.752 { 00:24:46.752 "subsystem": "vmd", 00:24:46.752 "config": [] 00:24:46.752 }, 00:24:46.752 { 00:24:46.752 "subsystem": "accel", 00:24:46.752 "config": [ 00:24:46.752 { 00:24:46.752 "method": "accel_set_options", 00:24:46.752 "params": { 00:24:46.752 "small_cache_size": 128, 00:24:46.752 "large_cache_size": 16, 00:24:46.752 "task_count": 2048, 00:24:46.752 "sequence_count": 2048, 00:24:46.752 "buf_count": 2048 00:24:46.752 } 00:24:46.752 } 00:24:46.752 ] 00:24:46.752 }, 00:24:46.752 { 00:24:46.752 "subsystem": "bdev", 00:24:46.752 "config": [ 00:24:46.752 { 00:24:46.752 "method": "bdev_set_options", 00:24:46.752 "params": { 00:24:46.752 "bdev_io_pool_size": 65535, 00:24:46.752 "bdev_io_cache_size": 256, 00:24:46.752 "bdev_auto_examine": true, 00:24:46.752 "iobuf_small_cache_size": 128, 00:24:46.752 "iobuf_large_cache_size": 16 00:24:46.752 } 00:24:46.752 }, 00:24:46.752 { 00:24:46.752 "method": "bdev_raid_set_options", 00:24:46.752 "params": { 00:24:46.752 
"process_window_size_kb": 1024, 00:24:46.752 "process_max_bandwidth_mb_sec": 0 00:24:46.752 } 00:24:46.752 }, 00:24:46.752 { 00:24:46.752 "method": "bdev_iscsi_set_options", 00:24:46.752 "params": { 00:24:46.752 "timeout_sec": 30 00:24:46.752 } 00:24:46.752 }, 00:24:46.752 { 00:24:46.752 "method": "bdev_nvme_set_options", 00:24:46.752 "params": { 00:24:46.752 "action_on_timeout": "none", 00:24:46.752 "timeout_us": 0, 00:24:46.752 "timeout_admin_us": 0, 00:24:46.752 "keep_alive_timeout_ms": 10000, 00:24:46.752 "arbitration_burst": 0, 00:24:46.752 "low_priority_weight": 0, 00:24:46.752 "medium_priority_weight": 0, 00:24:46.752 "high_priority_weight": 0, 00:24:46.752 "nvme_adminq_poll_period_us": 10000, 00:24:46.752 "nvme_ioq_poll_period_us": 0, 00:24:46.752 "io_queue_requests": 512, 00:24:46.752 "delay_cmd_submit": true, 00:24:46.752 "transport_retry_count": 4, 00:24:46.752 "bdev_retry_count": 3, 00:24:46.752 "transport_ack_timeout": 0, 00:24:46.752 "ctrlr_loss_timeout_sec": 0, 00:24:46.752 "reconnect_delay_sec": 0, 00:24:46.752 "fast_io_fail_timeout_sec": 0, 00:24:46.752 "disable_auto_failback": false, 00:24:46.752 "generate_uuids": false, 00:24:46.752 "transport_tos": 0, 00:24:46.752 "nvme_error_stat": false, 00:24:46.752 "rdma_srq_size": 0, 00:24:46.752 "io_path_stat": false, 00:24:46.752 "allow_accel_sequence": false, 00:24:46.752 "rdma_max_cq_size": 0, 00:24:46.752 "rdma_cm_event_timeout_ms": 0, 00:24:46.752 "dhchap_digests": [ 00:24:46.752 "sha256", 00:24:46.752 "sha384", 00:24:46.752 "sha512" 00:24:46.752 ], 00:24:46.752 "dhchap_dhgroups": [ 00:24:46.752 "null", 00:24:46.752 "ffdhe2048", 00:24:46.752 "ffdhe3072", 00:24:46.752 "ffdhe4096", 00:24:46.752 "ffdhe6144", 00:24:46.752 "ffdhe8192" 00:24:46.752 ], 00:24:46.752 "rdma_umr_per_io": false 00:24:46.752 } 00:24:46.752 }, 00:24:46.752 { 00:24:46.752 "method": "bdev_nvme_attach_controller", 00:24:46.752 "params": { 00:24:46.752 "name": "TLSTEST", 00:24:46.752 "trtype": "TCP", 00:24:46.752 "adrfam": "IPv4", 00:24:46.753 "traddr": "10.0.0.2", 00:24:46.753 "trsvcid": "4420", 00:24:46.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.753 "prchk_reftag": false, 00:24:46.753 "prchk_guard": false, 00:24:46.753 "ctrlr_loss_timeout_sec": 0, 00:24:46.753 "reconnect_delay_sec": 0, 00:24:46.753 "fast_io_fail_timeout_sec": 0, 00:24:46.753 "psk": "key0", 00:24:46.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:46.753 "hdgst": false, 00:24:46.753 "ddgst": false, 00:24:46.753 "multipath": "multipath" 00:24:46.753 } 00:24:46.753 }, 00:24:46.753 { 00:24:46.753 "method": "bdev_nvme_set_hotplug", 00:24:46.753 "params": { 00:24:46.753 "period_us": 100000, 00:24:46.753 "enable": false 00:24:46.753 } 00:24:46.753 }, 00:24:46.753 { 00:24:46.753 "method": "bdev_wait_for_examine" 00:24:46.753 } 00:24:46.753 ] 00:24:46.753 }, 00:24:46.753 { 00:24:46.753 "subsystem": "nbd", 00:24:46.753 "config": [] 00:24:46.753 } 00:24:46.753 ] 00:24:46.753 }' 00:24:46.753 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.753 00:06:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.012 [2024-12-10 00:06:21.723274] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:24:47.012 [2024-12-10 00:06:21.723323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391726 ] 00:24:47.012 [2024-12-10 00:06:21.797424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.012 [2024-12-10 00:06:21.836579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.271 [2024-12-10 00:06:21.989048] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:47.838 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.838 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:47.838 00:06:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:47.838 Running I/O for 10 seconds... 00:24:50.157 5255.00 IOPS, 20.53 MiB/s [2024-12-09T23:06:26.029Z] 5150.00 IOPS, 20.12 MiB/s [2024-12-09T23:06:26.965Z] 5154.00 IOPS, 20.13 MiB/s [2024-12-09T23:06:27.902Z] 5060.75 IOPS, 19.77 MiB/s [2024-12-09T23:06:28.840Z] 5070.00 IOPS, 19.80 MiB/s [2024-12-09T23:06:29.777Z] 5128.83 IOPS, 20.03 MiB/s [2024-12-09T23:06:30.716Z] 5115.43 IOPS, 19.98 MiB/s [2024-12-09T23:06:32.096Z] 5130.88 IOPS, 20.04 MiB/s [2024-12-09T23:06:33.033Z] 5112.67 IOPS, 19.97 MiB/s [2024-12-09T23:06:33.033Z] 5082.20 IOPS, 19.85 MiB/s 00:24:58.097 Latency(us) 00:24:58.098 [2024-12-09T23:06:33.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.098 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:58.098 Verification LBA range: start 0x0 length 0x2000 00:24:58.098 TLSTESTn1 : 10.02 5086.96 19.87 0.00 0.00 25125.70 4786.98 31001.38 00:24:58.098 [2024-12-09T23:06:33.034Z] =================================================================================================================== 00:24:58.098 [2024-12-09T23:06:33.034Z] Total : 5086.96 19.87 0.00 0.00 25125.70 4786.98 31001.38 00:24:58.098 { 00:24:58.098 "results": [ 00:24:58.098 { 00:24:58.098 "job": "TLSTESTn1", 00:24:58.098 "core_mask": "0x4", 00:24:58.098 "workload": "verify", 00:24:58.098 "status": "finished", 00:24:58.098 "verify_range": { 00:24:58.098 "start": 0, 00:24:58.098 "length": 8192 00:24:58.098 }, 00:24:58.098 "queue_depth": 128, 00:24:58.098 "io_size": 4096, 00:24:58.098 "runtime": 10.015607, 00:24:58.098 "iops": 5086.96078031017, 00:24:58.098 "mibps": 19.8709405480866, 00:24:58.098 "io_failed": 0, 00:24:58.098 "io_timeout": 0, 00:24:58.098 "avg_latency_us": 25125.701154453687, 00:24:58.098 "min_latency_us": 4786.977391304348, 00:24:58.098 "max_latency_us": 31001.377391304348 00:24:58.098 } 00:24:58.098 ], 00:24:58.098 "core_count": 1 00:24:58.098 } 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 391726 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 391726 ']' 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 391726 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 391726 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 391726' 00:24:58.098 killing process with pid 391726 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 391726 00:24:58.098 Received shutdown signal, test time was about 10.000000 seconds 00:24:58.098 00:24:58.098 Latency(us) 00:24:58.098 [2024-12-09T23:06:33.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.098 [2024-12-09T23:06:33.034Z] =================================================================================================================== 00:24:58.098 [2024-12-09T23:06:33.034Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 391726 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 391480 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 391480 ']' 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 391480 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 391480 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 391480' 00:24:58.098 killing process with pid 391480 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 391480 00:24:58.098 00:06:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 391480 00:24:58.357 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:58.357 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:58.357 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:58.357 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.357 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=393568 00:24:58.357 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:58.357 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 393568 00:24:58.357 00:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 393568 ']' 00:24:58.357 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.357 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.357 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.357 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.357 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.357 [2024-12-10 00:06:33.206999] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:24:58.357 [2024-12-10 00:06:33.207046] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.357 [2024-12-10 00:06:33.284403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.616 [2024-12-10 00:06:33.324397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.616 [2024-12-10 00:06:33.324431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.616 [2024-12-10 00:06:33.324438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.616 [2024-12-10 00:06:33.324444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.616 [2024-12-10 00:06:33.324450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:58.616 [2024-12-10 00:06:33.324969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.616 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.616 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:58.616 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:58.616 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:58.616 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.616 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.616 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.9m3ydLOiUq 00:24:58.616 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.9m3ydLOiUq 00:24:58.616 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:58.879 [2024-12-10 00:06:33.621721] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.879 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:59.139 00:06:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:59.139 [2024-12-10 00:06:34.038794] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:59.139 [2024-12-10 00:06:34.039006] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.139 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:59.398 malloc0 00:24:59.398 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:59.657 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.9m3ydLOiUq 00:24:59.916 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:00.176 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:00.176 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=393827 00:25:00.176 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:00.176 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 393827 /var/tmp/bdevperf.sock 00:25:00.176 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 393827 ']' 00:25:00.176 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:00.176 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.176 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:00.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:00.176 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.176 00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.176 [2024-12-10 00:06:34.911769] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:25:00.176 [2024-12-10 00:06:34.911819] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393827 ] 00:25:00.176 [2024-12-10 00:06:34.987857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.176 [2024-12-10 00:06:35.027864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.434 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.434 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:00.434 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9m3ydLOiUq 00:25:00.435 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:00.693 [2024-12-10 00:06:35.520939] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:00.693 nvme0n1 00:25:00.693 00:06:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:00.953 Running I/O for 1 seconds... 
00:25:01.890 5117.00 IOPS, 19.99 MiB/s 00:25:01.890 Latency(us) 00:25:01.890 [2024-12-09T23:06:36.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.890 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:01.890 Verification LBA range: start 0x0 length 0x2000 00:25:01.890 nvme0n1 : 1.02 5133.11 20.05 0.00 0.00 24696.55 5499.33 28038.01 00:25:01.890 [2024-12-09T23:06:36.826Z] =================================================================================================================== 00:25:01.890 [2024-12-09T23:06:36.826Z] Total : 5133.11 20.05 0.00 0.00 24696.55 5499.33 28038.01 00:25:01.890 { 00:25:01.890 "results": [ 00:25:01.890 { 00:25:01.890 "job": "nvme0n1", 00:25:01.890 "core_mask": "0x2", 00:25:01.890 "workload": "verify", 00:25:01.890 "status": "finished", 00:25:01.890 "verify_range": { 00:25:01.890 "start": 0, 00:25:01.890 "length": 8192 00:25:01.890 }, 00:25:01.890 "queue_depth": 128, 00:25:01.890 "io_size": 4096, 00:25:01.890 "runtime": 1.021798, 00:25:01.890 "iops": 5133.108500897438, 00:25:01.890 "mibps": 20.051205081630616, 00:25:01.890 "io_failed": 0, 00:25:01.890 "io_timeout": 0, 00:25:01.890 "avg_latency_us": 24696.554513698346, 00:25:01.890 "min_latency_us": 5499.325217391304, 00:25:01.890 "max_latency_us": 28038.01043478261 00:25:01.890 } 00:25:01.890 ], 00:25:01.890 "core_count": 1 00:25:01.890 } 00:25:01.890 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 393827 00:25:01.890 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 393827 ']' 00:25:01.890 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 393827 00:25:01.890 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:01.890 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.890 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 393827 00:25:01.890 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:01.890 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:01.890 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 393827' 00:25:01.890 killing process with pid 393827 00:25:01.890 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 393827 00:25:01.890 Received shutdown signal, test time was about 1.000000 seconds 00:25:01.890 00:25:01.890 Latency(us) 00:25:01.890 [2024-12-09T23:06:36.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.890 [2024-12-09T23:06:36.826Z] =================================================================================================================== 00:25:01.890 [2024-12-09T23:06:36.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.890 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 393827 00:25:02.150 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 393568 00:25:02.150 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 393568 ']' 00:25:02.150 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 393568 00:25:02.150 00:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:02.150 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:02.150 00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 393568 00:25:02.150 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:02.150 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:02.150 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 393568' 00:25:02.150 killing process with pid 393568 00:25:02.150 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 393568 00:25:02.150 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 393568 00:25:02.410 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:25:02.410 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:02.410 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:02.410 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:02.410 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=394292 00:25:02.410 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 394292 00:25:02.410 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:02.410 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 394292 ']' 00:25:02.410 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.410 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.410 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.410 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.410 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:02.410 [2024-12-10 00:06:37.252218] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:25:02.410 [2024-12-10 00:06:37.252267] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.410 [2024-12-10 00:06:37.331438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.670 [2024-12-10 00:06:37.369338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.670 [2024-12-10 00:06:37.369372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:02.670 [2024-12-10 00:06:37.369380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.670 [2024-12-10 00:06:37.369387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.670 [2024-12-10 00:06:37.369392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.670 [2024-12-10 00:06:37.369879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:02.670 [2024-12-10 00:06:37.517993] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.670 malloc0 00:25:02.670 [2024-12-10 00:06:37.546174] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:02.670 [2024-12-10 00:06:37.546373] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=394320 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 394320 /var/tmp/bdevperf.sock 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 394320 ']' 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:02.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.670 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:02.929 [2024-12-10 00:06:37.623963] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:25:02.929 [2024-12-10 00:06:37.624004] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394320 ] 00:25:02.929 [2024-12-10 00:06:37.699330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.929 [2024-12-10 00:06:37.740505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.929 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:02.929 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:02.929 00:06:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9m3ydLOiUq 00:25:03.188 00:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:03.447 [2024-12-10 00:06:38.218127] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:03.447 nvme0n1 00:25:03.447 00:06:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:03.706 Running I/O for 1 seconds... 00:25:04.646 4574.00 IOPS, 17.87 MiB/s 00:25:04.646 Latency(us) 00:25:04.646 [2024-12-09T23:06:39.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.646 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:04.646 Verification LBA range: start 0x0 length 0x2000 00:25:04.646 nvme0n1 : 1.01 4644.95 18.14 0.00 0.00 27373.36 5185.89 33508.84 00:25:04.646 [2024-12-09T23:06:39.582Z] =================================================================================================================== 00:25:04.646 [2024-12-09T23:06:39.582Z] Total : 4644.95 18.14 0.00 0.00 27373.36 5185.89 33508.84 00:25:04.646 { 00:25:04.646 "results": [ 00:25:04.646 { 00:25:04.646 "job": "nvme0n1", 00:25:04.646 "core_mask": "0x2", 00:25:04.646 "workload": "verify", 00:25:04.646 "status": "finished", 00:25:04.646 "verify_range": { 00:25:04.646 "start": 0, 00:25:04.646 "length": 8192 00:25:04.646 }, 00:25:04.646 "queue_depth": 128, 00:25:04.646 "io_size": 4096, 00:25:04.646 "runtime": 1.012498, 00:25:04.646 "iops": 4644.9474468097715, 00:25:04.646 "mibps": 18.14432596410067, 00:25:04.646 "io_failed": 0, 00:25:04.646 "io_timeout": 0, 00:25:04.646 "avg_latency_us": 27373.362497573245, 00:25:04.646 "min_latency_us": 5185.892173913044, 00:25:04.646 "max_latency_us": 33508.84173913043 00:25:04.646 } 00:25:04.646 ], 00:25:04.646 "core_count": 1 00:25:04.646 } 00:25:04.646 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:25:04.646 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.646 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.646 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.646 00:06:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:25:04.647 "subsystems": [ 00:25:04.647 { 00:25:04.647 "subsystem": "keyring", 00:25:04.647 "config": [ 00:25:04.647 { 00:25:04.647 "method": "keyring_file_add_key", 00:25:04.647 "params": { 00:25:04.647 "name": "key0", 00:25:04.647 "path": "/tmp/tmp.9m3ydLOiUq" 00:25:04.647 } 00:25:04.647 } 00:25:04.647 ] 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "subsystem": "iobuf", 00:25:04.647 "config": [ 00:25:04.647 { 00:25:04.647 "method": "iobuf_set_options", 00:25:04.647 "params": { 00:25:04.647 "small_pool_count": 8192, 00:25:04.647 "large_pool_count": 1024, 00:25:04.647 "small_bufsize": 8192, 00:25:04.647 "large_bufsize": 135168, 00:25:04.647 "enable_numa": false 00:25:04.647 } 00:25:04.647 } 00:25:04.647 ] 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "subsystem": "sock", 00:25:04.647 "config": [ 00:25:04.647 { 00:25:04.647 "method": "sock_set_default_impl", 00:25:04.647 "params": { 00:25:04.647 "impl_name": "posix" 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": "sock_impl_set_options", 00:25:04.647 "params": { 00:25:04.647 "impl_name": "ssl", 00:25:04.647 "recv_buf_size": 4096, 00:25:04.647 "send_buf_size": 4096, 00:25:04.647 "enable_recv_pipe": true, 00:25:04.647 "enable_quickack": false, 00:25:04.647 "enable_placement_id": 0, 00:25:04.647 "enable_zerocopy_send_server": true, 00:25:04.647 "enable_zerocopy_send_client": false, 00:25:04.647 "zerocopy_threshold": 0, 00:25:04.647 "tls_version": 0, 00:25:04.647 "enable_ktls": false 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": "sock_impl_set_options", 00:25:04.647 "params": { 00:25:04.647 "impl_name": "posix", 00:25:04.647 "recv_buf_size": 2097152, 00:25:04.647 "send_buf_size": 2097152, 00:25:04.647 "enable_recv_pipe": true, 00:25:04.647 "enable_quickack": false, 00:25:04.647 "enable_placement_id": 0, 00:25:04.647 "enable_zerocopy_send_server": true, 00:25:04.647 "enable_zerocopy_send_client": false, 00:25:04.647 "zerocopy_threshold": 0, 00:25:04.647 "tls_version": 0, 00:25:04.647 "enable_ktls": false 00:25:04.647 } 00:25:04.647 } 00:25:04.647 ] 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "subsystem": "vmd", 00:25:04.647 "config": [] 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "subsystem": "accel", 00:25:04.647 "config": [ 00:25:04.647 { 00:25:04.647 "method": "accel_set_options", 00:25:04.647 "params": { 00:25:04.647 "small_cache_size": 128, 00:25:04.647 "large_cache_size": 16, 00:25:04.647 "task_count": 2048, 00:25:04.647 "sequence_count": 2048, 00:25:04.647 "buf_count": 2048 00:25:04.647 } 00:25:04.647 } 00:25:04.647 ] 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "subsystem": "bdev", 00:25:04.647 "config": [ 00:25:04.647 { 00:25:04.647 "method": "bdev_set_options", 00:25:04.647 "params": { 00:25:04.647 "bdev_io_pool_size": 65535, 00:25:04.647 "bdev_io_cache_size": 256, 00:25:04.647 "bdev_auto_examine": true, 00:25:04.647 "iobuf_small_cache_size": 128, 00:25:04.647 "iobuf_large_cache_size": 16 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": "bdev_raid_set_options", 00:25:04.647 "params": { 00:25:04.647 "process_window_size_kb": 1024, 00:25:04.647 "process_max_bandwidth_mb_sec": 0 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": "bdev_iscsi_set_options", 00:25:04.647 "params": { 00:25:04.647 "timeout_sec": 30 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": "bdev_nvme_set_options", 00:25:04.647 "params": { 00:25:04.647 "action_on_timeout": "none", 00:25:04.647 
"timeout_us": 0, 00:25:04.647 "timeout_admin_us": 0, 00:25:04.647 "keep_alive_timeout_ms": 10000, 00:25:04.647 "arbitration_burst": 0, 00:25:04.647 "low_priority_weight": 0, 00:25:04.647 "medium_priority_weight": 0, 00:25:04.647 "high_priority_weight": 0, 00:25:04.647 "nvme_adminq_poll_period_us": 10000, 00:25:04.647 "nvme_ioq_poll_period_us": 0, 00:25:04.647 "io_queue_requests": 0, 00:25:04.647 "delay_cmd_submit": true, 00:25:04.647 "transport_retry_count": 4, 00:25:04.647 "bdev_retry_count": 3, 00:25:04.647 "transport_ack_timeout": 0, 00:25:04.647 "ctrlr_loss_timeout_sec": 0, 00:25:04.647 "reconnect_delay_sec": 0, 00:25:04.647 "fast_io_fail_timeout_sec": 0, 00:25:04.647 "disable_auto_failback": false, 00:25:04.647 "generate_uuids": false, 00:25:04.647 "transport_tos": 0, 00:25:04.647 "nvme_error_stat": false, 00:25:04.647 "rdma_srq_size": 0, 00:25:04.647 "io_path_stat": false, 00:25:04.647 "allow_accel_sequence": false, 00:25:04.647 "rdma_max_cq_size": 0, 00:25:04.647 "rdma_cm_event_timeout_ms": 0, 00:25:04.647 "dhchap_digests": [ 00:25:04.647 "sha256", 00:25:04.647 "sha384", 00:25:04.647 "sha512" 00:25:04.647 ], 00:25:04.647 "dhchap_dhgroups": [ 00:25:04.647 "null", 00:25:04.647 "ffdhe2048", 00:25:04.647 "ffdhe3072", 00:25:04.647 "ffdhe4096", 00:25:04.647 "ffdhe6144", 00:25:04.647 "ffdhe8192" 00:25:04.647 ], 00:25:04.647 "rdma_umr_per_io": false 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": "bdev_nvme_set_hotplug", 00:25:04.647 "params": { 00:25:04.647 "period_us": 100000, 00:25:04.647 "enable": false 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": "bdev_malloc_create", 00:25:04.647 "params": { 00:25:04.647 "name": "malloc0", 00:25:04.647 "num_blocks": 8192, 00:25:04.647 "block_size": 4096, 00:25:04.647 "physical_block_size": 4096, 00:25:04.647 "uuid": "451c7221-5b59-45d5-ba0a-5453059119ff", 00:25:04.647 "optimal_io_boundary": 0, 00:25:04.647 "md_size": 0, 00:25:04.647 "dif_type": 0, 00:25:04.647 "dif_is_head_of_md": false, 00:25:04.647 "dif_pi_format": 0 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": "bdev_wait_for_examine" 00:25:04.647 } 00:25:04.647 ] 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "subsystem": "nbd", 00:25:04.647 "config": [] 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "subsystem": "scheduler", 00:25:04.647 "config": [ 00:25:04.647 { 00:25:04.647 "method": "framework_set_scheduler", 00:25:04.647 "params": { 00:25:04.647 "name": "static" 00:25:04.647 } 00:25:04.647 } 00:25:04.647 ] 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "subsystem": "nvmf", 00:25:04.647 "config": [ 00:25:04.647 { 00:25:04.647 "method": "nvmf_set_config", 00:25:04.647 "params": { 00:25:04.647 "discovery_filter": "match_any", 00:25:04.647 "admin_cmd_passthru": { 00:25:04.647 "identify_ctrlr": false 00:25:04.647 }, 00:25:04.647 "dhchap_digests": [ 00:25:04.647 "sha256", 00:25:04.647 "sha384", 00:25:04.647 "sha512" 00:25:04.647 ], 00:25:04.647 "dhchap_dhgroups": [ 00:25:04.647 "null", 00:25:04.647 "ffdhe2048", 00:25:04.647 "ffdhe3072", 00:25:04.647 "ffdhe4096", 00:25:04.647 "ffdhe6144", 00:25:04.647 "ffdhe8192" 00:25:04.647 ] 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": "nvmf_set_max_subsystems", 00:25:04.647 "params": { 00:25:04.647 "max_subsystems": 1024 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": "nvmf_set_crdt", 00:25:04.647 "params": { 00:25:04.647 "crdt1": 0, 00:25:04.647 "crdt2": 0, 00:25:04.647 "crdt3": 0 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": 
"nvmf_create_transport", 00:25:04.647 "params": { 00:25:04.647 "trtype": "TCP", 00:25:04.647 "max_queue_depth": 128, 00:25:04.647 "max_io_qpairs_per_ctrlr": 127, 00:25:04.647 "in_capsule_data_size": 4096, 00:25:04.647 "max_io_size": 131072, 00:25:04.647 "io_unit_size": 131072, 00:25:04.647 "max_aq_depth": 128, 00:25:04.647 "num_shared_buffers": 511, 00:25:04.647 "buf_cache_size": 4294967295, 00:25:04.647 "dif_insert_or_strip": false, 00:25:04.647 "zcopy": false, 00:25:04.647 "c2h_success": false, 00:25:04.647 "sock_priority": 0, 00:25:04.647 "abort_timeout_sec": 1, 00:25:04.647 "ack_timeout": 0, 00:25:04.647 "data_wr_pool_size": 0 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": "nvmf_create_subsystem", 00:25:04.647 "params": { 00:25:04.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.647 "allow_any_host": false, 00:25:04.647 "serial_number": "00000000000000000000", 00:25:04.647 "model_number": "SPDK bdev Controller", 00:25:04.647 "max_namespaces": 32, 00:25:04.647 "min_cntlid": 1, 00:25:04.647 "max_cntlid": 65519, 00:25:04.647 "ana_reporting": false 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": "nvmf_subsystem_add_host", 00:25:04.647 "params": { 00:25:04.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.647 "host": "nqn.2016-06.io.spdk:host1", 00:25:04.647 "psk": "key0" 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": "nvmf_subsystem_add_ns", 00:25:04.647 "params": { 00:25:04.647 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.647 "namespace": { 00:25:04.647 "nsid": 1, 00:25:04.647 "bdev_name": "malloc0", 00:25:04.647 "nguid": "451C72215B5945D5BA0A5453059119FF", 00:25:04.647 "uuid": "451c7221-5b59-45d5-ba0a-5453059119ff", 00:25:04.647 "no_auto_visible": false 00:25:04.647 } 00:25:04.647 } 00:25:04.647 }, 00:25:04.647 { 00:25:04.647 "method": "nvmf_subsystem_add_listener", 00:25:04.647 "params": { 00:25:04.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.648 "listen_address": { 00:25:04.648 "trtype": "TCP", 00:25:04.648 "adrfam": "IPv4", 00:25:04.648 "traddr": "10.0.0.2", 00:25:04.648 "trsvcid": "4420" 00:25:04.648 }, 00:25:04.648 "secure_channel": false, 00:25:04.648 "sock_impl": "ssl" 00:25:04.648 } 00:25:04.648 } 00:25:04.648 ] 00:25:04.648 } 00:25:04.648 ] 00:25:04.648 }' 00:25:04.648 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:04.907 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:25:04.907 "subsystems": [ 00:25:04.907 { 00:25:04.907 "subsystem": "keyring", 00:25:04.907 "config": [ 00:25:04.907 { 00:25:04.907 "method": "keyring_file_add_key", 00:25:04.907 "params": { 00:25:04.907 "name": "key0", 00:25:04.907 "path": "/tmp/tmp.9m3ydLOiUq" 00:25:04.907 } 00:25:04.907 } 00:25:04.907 ] 00:25:04.907 }, 00:25:04.907 { 00:25:04.907 "subsystem": "iobuf", 00:25:04.907 "config": [ 00:25:04.907 { 00:25:04.907 "method": "iobuf_set_options", 00:25:04.907 "params": { 00:25:04.907 "small_pool_count": 8192, 00:25:04.907 "large_pool_count": 1024, 00:25:04.907 "small_bufsize": 8192, 00:25:04.907 "large_bufsize": 135168, 00:25:04.907 "enable_numa": false 00:25:04.907 } 00:25:04.907 } 00:25:04.907 ] 00:25:04.907 }, 00:25:04.907 { 00:25:04.907 "subsystem": "sock", 00:25:04.907 "config": [ 00:25:04.907 { 00:25:04.907 "method": "sock_set_default_impl", 00:25:04.907 "params": { 00:25:04.907 "impl_name": "posix" 00:25:04.907 } 00:25:04.907 }, 00:25:04.907 { 00:25:04.907 
"method": "sock_impl_set_options", 00:25:04.907 "params": { 00:25:04.907 "impl_name": "ssl", 00:25:04.907 "recv_buf_size": 4096, 00:25:04.907 "send_buf_size": 4096, 00:25:04.907 "enable_recv_pipe": true, 00:25:04.907 "enable_quickack": false, 00:25:04.907 "enable_placement_id": 0, 00:25:04.907 "enable_zerocopy_send_server": true, 00:25:04.907 "enable_zerocopy_send_client": false, 00:25:04.907 "zerocopy_threshold": 0, 00:25:04.907 "tls_version": 0, 00:25:04.907 "enable_ktls": false 00:25:04.907 } 00:25:04.907 }, 00:25:04.907 { 00:25:04.907 "method": "sock_impl_set_options", 00:25:04.907 "params": { 00:25:04.907 "impl_name": "posix", 00:25:04.907 "recv_buf_size": 2097152, 00:25:04.907 "send_buf_size": 2097152, 00:25:04.907 "enable_recv_pipe": true, 00:25:04.907 "enable_quickack": false, 00:25:04.907 "enable_placement_id": 0, 00:25:04.907 "enable_zerocopy_send_server": true, 00:25:04.907 "enable_zerocopy_send_client": false, 00:25:04.907 "zerocopy_threshold": 0, 00:25:04.907 "tls_version": 0, 00:25:04.907 "enable_ktls": false 00:25:04.907 } 00:25:04.907 } 00:25:04.907 ] 00:25:04.907 }, 00:25:04.907 { 00:25:04.907 "subsystem": "vmd", 00:25:04.907 "config": [] 00:25:04.907 }, 00:25:04.907 { 00:25:04.907 "subsystem": "accel", 00:25:04.907 "config": [ 00:25:04.907 { 00:25:04.907 "method": "accel_set_options", 00:25:04.907 "params": { 00:25:04.907 "small_cache_size": 128, 00:25:04.907 "large_cache_size": 16, 00:25:04.907 "task_count": 2048, 00:25:04.907 "sequence_count": 2048, 00:25:04.907 "buf_count": 2048 00:25:04.907 } 00:25:04.907 } 00:25:04.907 ] 00:25:04.907 }, 00:25:04.907 { 00:25:04.907 "subsystem": "bdev", 00:25:04.907 "config": [ 00:25:04.907 { 00:25:04.907 "method": "bdev_set_options", 00:25:04.907 "params": { 00:25:04.907 "bdev_io_pool_size": 65535, 00:25:04.907 "bdev_io_cache_size": 256, 00:25:04.907 "bdev_auto_examine": true, 00:25:04.907 "iobuf_small_cache_size": 128, 00:25:04.907 "iobuf_large_cache_size": 16 00:25:04.907 } 00:25:04.907 }, 00:25:04.907 { 00:25:04.907 "method": "bdev_raid_set_options", 00:25:04.907 "params": { 00:25:04.907 "process_window_size_kb": 1024, 00:25:04.907 "process_max_bandwidth_mb_sec": 0 00:25:04.907 } 00:25:04.907 }, 00:25:04.907 { 00:25:04.907 "method": "bdev_iscsi_set_options", 00:25:04.907 "params": { 00:25:04.907 "timeout_sec": 30 00:25:04.907 } 00:25:04.907 }, 00:25:04.907 { 00:25:04.907 "method": "bdev_nvme_set_options", 00:25:04.907 "params": { 00:25:04.907 "action_on_timeout": "none", 00:25:04.907 "timeout_us": 0, 00:25:04.907 "timeout_admin_us": 0, 00:25:04.907 "keep_alive_timeout_ms": 10000, 00:25:04.907 "arbitration_burst": 0, 00:25:04.907 "low_priority_weight": 0, 00:25:04.907 "medium_priority_weight": 0, 00:25:04.907 "high_priority_weight": 0, 00:25:04.907 "nvme_adminq_poll_period_us": 10000, 00:25:04.907 "nvme_ioq_poll_period_us": 0, 00:25:04.907 "io_queue_requests": 512, 00:25:04.907 "delay_cmd_submit": true, 00:25:04.907 "transport_retry_count": 4, 00:25:04.907 "bdev_retry_count": 3, 00:25:04.907 "transport_ack_timeout": 0, 00:25:04.907 "ctrlr_loss_timeout_sec": 0, 00:25:04.907 "reconnect_delay_sec": 0, 00:25:04.907 "fast_io_fail_timeout_sec": 0, 00:25:04.907 "disable_auto_failback": false, 00:25:04.907 "generate_uuids": false, 00:25:04.907 "transport_tos": 0, 00:25:04.907 "nvme_error_stat": false, 00:25:04.907 "rdma_srq_size": 0, 00:25:04.907 "io_path_stat": false, 00:25:04.907 "allow_accel_sequence": false, 00:25:04.907 "rdma_max_cq_size": 0, 00:25:04.907 "rdma_cm_event_timeout_ms": 0, 00:25:04.907 "dhchap_digests": [ 00:25:04.907 
"sha256", 00:25:04.907 "sha384", 00:25:04.908 "sha512" 00:25:04.908 ], 00:25:04.908 "dhchap_dhgroups": [ 00:25:04.908 "null", 00:25:04.908 "ffdhe2048", 00:25:04.908 "ffdhe3072", 00:25:04.908 "ffdhe4096", 00:25:04.908 "ffdhe6144", 00:25:04.908 "ffdhe8192" 00:25:04.908 ], 00:25:04.908 "rdma_umr_per_io": false 00:25:04.908 } 00:25:04.908 }, 00:25:04.908 { 00:25:04.908 "method": "bdev_nvme_attach_controller", 00:25:04.908 "params": { 00:25:04.908 "name": "nvme0", 00:25:04.908 "trtype": "TCP", 00:25:04.908 "adrfam": "IPv4", 00:25:04.908 "traddr": "10.0.0.2", 00:25:04.908 "trsvcid": "4420", 00:25:04.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.908 "prchk_reftag": false, 00:25:04.908 "prchk_guard": false, 00:25:04.908 "ctrlr_loss_timeout_sec": 0, 00:25:04.908 "reconnect_delay_sec": 0, 00:25:04.908 "fast_io_fail_timeout_sec": 0, 00:25:04.908 "psk": "key0", 00:25:04.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:04.908 "hdgst": false, 00:25:04.908 "ddgst": false, 00:25:04.908 "multipath": "multipath" 00:25:04.908 } 00:25:04.908 }, 00:25:04.908 { 00:25:04.908 "method": "bdev_nvme_set_hotplug", 00:25:04.908 "params": { 00:25:04.908 "period_us": 100000, 00:25:04.908 "enable": false 00:25:04.908 } 00:25:04.908 }, 00:25:04.908 { 00:25:04.908 "method": "bdev_enable_histogram", 00:25:04.908 "params": { 00:25:04.908 "name": "nvme0n1", 00:25:04.908 "enable": true 00:25:04.908 } 00:25:04.908 }, 00:25:04.908 { 00:25:04.908 "method": "bdev_wait_for_examine" 00:25:04.908 } 00:25:04.908 ] 00:25:04.908 }, 00:25:04.908 { 00:25:04.908 "subsystem": "nbd", 00:25:04.908 "config": [] 00:25:04.908 } 00:25:04.908 ] 00:25:04.908 }' 00:25:04.908 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 394320 00:25:04.908 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 394320 ']' 00:25:04.908 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 394320 00:25:04.908 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:04.908 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.908 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394320 00:25:05.167 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:05.167 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:05.167 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394320' 00:25:05.167 killing process with pid 394320 00:25:05.167 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 394320 00:25:05.167 Received shutdown signal, test time was about 1.000000 seconds 00:25:05.167 00:25:05.167 Latency(us) 00:25:05.167 [2024-12-09T23:06:40.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.167 [2024-12-09T23:06:40.103Z] =================================================================================================================== 00:25:05.167 [2024-12-09T23:06:40.103Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.167 00:06:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 394320 00:25:05.167 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 394292 00:25:05.167 00:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 394292 ']' 00:25:05.167 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 394292 00:25:05.167 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:05.167 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:05.167 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394292 00:25:05.167 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:05.167 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:05.167 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394292' 00:25:05.167 killing process with pid 394292 00:25:05.167 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 394292 00:25:05.167 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 394292 00:25:05.426 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:25:05.426 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:05.426 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:05.426 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:25:05.426 "subsystems": [ 00:25:05.426 { 00:25:05.426 "subsystem": "keyring", 00:25:05.426 "config": [ 00:25:05.426 { 00:25:05.426 "method": "keyring_file_add_key", 00:25:05.426 "params": { 00:25:05.426 "name": "key0", 00:25:05.426 "path": "/tmp/tmp.9m3ydLOiUq" 00:25:05.426 } 00:25:05.426 } 00:25:05.426 ] 00:25:05.426 }, 00:25:05.426 { 00:25:05.426 "subsystem": "iobuf", 00:25:05.426 "config": [ 00:25:05.426 { 00:25:05.426 "method": "iobuf_set_options", 00:25:05.426 "params": { 00:25:05.426 "small_pool_count": 8192, 00:25:05.426 "large_pool_count": 1024, 00:25:05.426 "small_bufsize": 8192, 00:25:05.426 "large_bufsize": 135168, 00:25:05.426 "enable_numa": false 00:25:05.426 } 00:25:05.426 } 00:25:05.426 ] 00:25:05.426 }, 00:25:05.426 { 00:25:05.426 "subsystem": "sock", 00:25:05.426 "config": [ 00:25:05.426 { 00:25:05.426 "method": "sock_set_default_impl", 00:25:05.426 "params": { 00:25:05.426 "impl_name": "posix" 00:25:05.426 } 00:25:05.426 }, 00:25:05.426 { 00:25:05.426 "method": "sock_impl_set_options", 00:25:05.426 "params": { 00:25:05.426 "impl_name": "ssl", 00:25:05.426 "recv_buf_size": 4096, 00:25:05.426 "send_buf_size": 4096, 00:25:05.426 "enable_recv_pipe": true, 00:25:05.426 "enable_quickack": false, 00:25:05.426 "enable_placement_id": 0, 00:25:05.426 "enable_zerocopy_send_server": true, 00:25:05.426 "enable_zerocopy_send_client": false, 00:25:05.426 "zerocopy_threshold": 0, 00:25:05.426 "tls_version": 0, 00:25:05.426 "enable_ktls": false 00:25:05.426 } 00:25:05.426 }, 00:25:05.426 { 00:25:05.426 "method": "sock_impl_set_options", 00:25:05.426 "params": { 00:25:05.426 "impl_name": "posix", 00:25:05.426 "recv_buf_size": 2097152, 00:25:05.426 "send_buf_size": 2097152, 00:25:05.426 "enable_recv_pipe": true, 00:25:05.426 "enable_quickack": false, 00:25:05.426 "enable_placement_id": 0, 00:25:05.426 "enable_zerocopy_send_server": true, 00:25:05.426 "enable_zerocopy_send_client": false, 00:25:05.426 
"zerocopy_threshold": 0, 00:25:05.426 "tls_version": 0, 00:25:05.426 "enable_ktls": false 00:25:05.426 } 00:25:05.426 } 00:25:05.426 ] 00:25:05.426 }, 00:25:05.427 { 00:25:05.427 "subsystem": "vmd", 00:25:05.427 "config": [] 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "subsystem": "accel", 00:25:05.427 "config": [ 00:25:05.427 { 00:25:05.427 "method": "accel_set_options", 00:25:05.427 "params": { 00:25:05.427 "small_cache_size": 128, 00:25:05.427 "large_cache_size": 16, 00:25:05.427 "task_count": 2048, 00:25:05.427 "sequence_count": 2048, 00:25:05.427 "buf_count": 2048 00:25:05.427 } 00:25:05.427 } 00:25:05.427 ] 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "subsystem": "bdev", 00:25:05.427 "config": [ 00:25:05.427 { 00:25:05.427 "method": "bdev_set_options", 00:25:05.427 "params": { 00:25:05.427 "bdev_io_pool_size": 65535, 00:25:05.427 "bdev_io_cache_size": 256, 00:25:05.427 "bdev_auto_examine": true, 00:25:05.427 "iobuf_small_cache_size": 128, 00:25:05.427 "iobuf_large_cache_size": 16 00:25:05.427 } 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "method": "bdev_raid_set_options", 00:25:05.427 "params": { 00:25:05.427 "process_window_size_kb": 1024, 00:25:05.427 "process_max_bandwidth_mb_sec": 0 00:25:05.427 } 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "method": "bdev_iscsi_set_options", 00:25:05.427 "params": { 00:25:05.427 "timeout_sec": 30 00:25:05.427 } 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "method": "bdev_nvme_set_options", 00:25:05.427 "params": { 00:25:05.427 "action_on_timeout": "none", 00:25:05.427 "timeout_us": 0, 00:25:05.427 "timeout_admin_us": 0, 00:25:05.427 "keep_alive_timeout_ms": 10000, 00:25:05.427 "arbitration_burst": 0, 00:25:05.427 "low_priority_weight": 0, 00:25:05.427 "medium_priority_weight": 0, 00:25:05.427 "high_priority_weight": 0, 00:25:05.427 "nvme_adminq_poll_period_us": 10000, 00:25:05.427 "nvme_ioq_poll_period_us": 0, 00:25:05.427 "io_queue_requests": 0, 00:25:05.427 "delay_cmd_submit": true, 00:25:05.427 "transport_retry_count": 4, 00:25:05.427 "bdev_retry_count": 3, 00:25:05.427 "transport_ack_timeout": 0, 00:25:05.427 "ctrlr_loss_timeout_sec": 0, 00:25:05.427 "reconnect_delay_sec": 0, 00:25:05.427 "fast_io_fail_timeout_sec": 0, 00:25:05.427 "disable_auto_failback": false, 00:25:05.427 "generate_uuids": false, 00:25:05.427 "transport_tos": 0, 00:25:05.427 "nvme_error_stat": false, 00:25:05.427 "rdma_srq_size": 0, 00:25:05.427 "io_path_stat": false, 00:25:05.427 "allow_accel_sequence": false, 00:25:05.427 "rdma_max_cq_size": 0, 00:25:05.427 "rdma_cm_event_timeout_ms": 0, 00:25:05.427 "dhchap_digests": [ 00:25:05.427 "sha256", 00:25:05.427 "sha384", 00:25:05.427 "sha512" 00:25:05.427 ], 00:25:05.427 "dhchap_dhgroups": [ 00:25:05.427 "null", 00:25:05.427 "ffdhe2048", 00:25:05.427 "ffdhe3072", 00:25:05.427 "ffdhe4096", 00:25:05.427 "ffdhe6144", 00:25:05.427 "ffdhe8192" 00:25:05.427 ], 00:25:05.427 "rdma_umr_per_io": false 00:25:05.427 } 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "method": "bdev_nvme_set_hotplug", 00:25:05.427 "params": { 00:25:05.427 "period_us": 100000, 00:25:05.427 "enable": false 00:25:05.427 } 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "method": "bdev_malloc_create", 00:25:05.427 "params": { 00:25:05.427 "name": "malloc0", 00:25:05.427 "num_blocks": 8192, 00:25:05.427 "block_size": 4096, 00:25:05.427 "physical_block_size": 4096, 00:25:05.427 "uuid": "451c7221-5b59-45d5-ba0a-5453059119ff", 00:25:05.427 "optimal_io_boundary": 0, 00:25:05.427 "md_size": 0, 00:25:05.427 "dif_type": 0, 00:25:05.427 "dif_is_head_of_md": false, 00:25:05.427 
"dif_pi_format": 0 00:25:05.427 } 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "method": "bdev_wait_for_examine" 00:25:05.427 } 00:25:05.427 ] 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "subsystem": "nbd", 00:25:05.427 "config": [] 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "subsystem": "scheduler", 00:25:05.427 "config": [ 00:25:05.427 { 00:25:05.427 "method": "framework_set_scheduler", 00:25:05.427 "params": { 00:25:05.427 "name": "static" 00:25:05.427 } 00:25:05.427 } 00:25:05.427 ] 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "subsystem": "nvmf", 00:25:05.427 "config": [ 00:25:05.427 { 00:25:05.427 "method": "nvmf_set_config", 00:25:05.427 "params": { 00:25:05.427 "discovery_filter": "match_any", 00:25:05.427 "admin_cmd_passthru": { 00:25:05.427 "identify_ctrlr": false 00:25:05.427 }, 00:25:05.427 "dhchap_digests": [ 00:25:05.427 "sha256", 00:25:05.427 "sha384", 00:25:05.427 "sha512" 00:25:05.427 ], 00:25:05.427 "dhchap_dhgroups": [ 00:25:05.427 "null", 00:25:05.427 "ffdhe2048", 00:25:05.427 "ffdhe3072", 00:25:05.427 "ffdhe4096", 00:25:05.427 "ffdhe6144", 00:25:05.427 "ffdhe8192" 00:25:05.427 ] 00:25:05.427 } 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "method": "nvmf_set_max_subsystems", 00:25:05.427 "params": { 00:25:05.427 "max_subsystems": 1024 00:25:05.427 } 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "method": "nvmf_set_crdt", 00:25:05.427 "params": { 00:25:05.427 "crdt1": 0, 00:25:05.427 "crdt2": 0, 00:25:05.427 "crdt3": 0 00:25:05.427 } 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "method": "nvmf_create_transport", 00:25:05.427 "params": { 00:25:05.427 "trtype": "TCP", 00:25:05.427 "max_queue_depth": 128, 00:25:05.427 "max_io_qpairs_per_ctrlr": 127, 00:25:05.427 "in_capsule_data_size": 4096, 00:25:05.427 "max_io_size": 131072, 00:25:05.427 "io_unit_size": 131072, 00:25:05.427 "max_aq_depth": 128, 00:25:05.427 "num_shared_buffers": 511, 00:25:05.427 "buf_cache_size": 4294967295, 00:25:05.427 "dif_insert_or_strip": false, 00:25:05.427 "zcopy": false, 00:25:05.427 "c2h_success": false, 00:25:05.427 "sock_priority": 0, 00:25:05.427 "abort_timeout_sec": 1, 00:25:05.427 "ack_timeout": 0, 00:25:05.427 "data_wr_pool_size": 0 00:25:05.427 } 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "method": "nvmf_create_subsystem", 00:25:05.427 "params": { 00:25:05.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.427 "allow_any_host": false, 00:25:05.427 "serial_number": "00000000000000000000", 00:25:05.427 "model_number": "SPDK bdev Controller", 00:25:05.427 "max_namespaces": 32, 00:25:05.427 "min_cntlid": 1, 00:25:05.427 "max_cntlid": 65519, 00:25:05.427 "ana_reporting": false 00:25:05.427 } 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "method": "nvmf_subsystem_add_host", 00:25:05.427 "params": { 00:25:05.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.427 "host": "nqn.2016-06.io.spdk:host1", 00:25:05.427 "psk": "key0" 00:25:05.427 } 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "method": "nvmf_subsystem_add_ns", 00:25:05.427 "params": { 00:25:05.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.427 "namespace": { 00:25:05.427 "nsid": 1, 00:25:05.427 "bdev_name": "malloc0", 00:25:05.427 "nguid": "451C72215B5945D5BA0A5453059119FF", 00:25:05.427 "uuid": "451c7221-5b59-45d5-ba0a-5453059119ff", 00:25:05.427 "no_auto_visible": false 00:25:05.427 } 00:25:05.427 } 00:25:05.427 }, 00:25:05.427 { 00:25:05.427 "method": "nvmf_subsystem_add_listener", 00:25:05.427 "params": { 00:25:05.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.427 "listen_address": { 00:25:05.427 "trtype": "TCP", 00:25:05.427 "adrfam": 
"IPv4", 00:25:05.427 "traddr": "10.0.0.2", 00:25:05.427 "trsvcid": "4420" 00:25:05.427 }, 00:25:05.427 "secure_channel": false, 00:25:05.427 "sock_impl": "ssl" 00:25:05.427 } 00:25:05.427 } 00:25:05.427 ] 00:25:05.427 } 00:25:05.427 ] 00:25:05.427 }' 00:25:05.427 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.427 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=394792 00:25:05.427 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:05.427 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 394792 00:25:05.427 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 394792 ']' 00:25:05.427 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.427 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.427 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.427 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.427 00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.427 [2024-12-10 00:06:40.299916] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:25:05.427 [2024-12-10 00:06:40.299961] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.687 [2024-12-10 00:06:40.380096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.687 [2024-12-10 00:06:40.420238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.687 [2024-12-10 00:06:40.420274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.687 [2024-12-10 00:06:40.420282] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.687 [2024-12-10 00:06:40.420288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.687 [2024-12-10 00:06:40.420293] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:05.687 [2024-12-10 00:06:40.420844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.945 [2024-12-10 00:06:40.635938] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.946 [2024-12-10 00:06:40.667969] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:05.946 [2024-12-10 00:06:40.668176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.207 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.207 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:06.207 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:06.207 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:06.207 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:06.470 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.470 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=395034 00:25:06.470 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 395034 /var/tmp/bdevperf.sock 00:25:06.470 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 395034 ']' 00:25:06.470 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:06.470 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:06.470 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.470 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:06.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
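After save_config, both applications are restarted from the captured JSON rather than by re-issuing individual RPCs: the trace shows nvmf_tgt taking -c /dev/fd/62 and bdevperf taking -c /dev/fd/63, each alongside an echo '{ ... }' of the corresponding config. That pairing is consistent with bash process substitution feeding the JSON in as a config file. A minimal sketch of that pattern follows — assumed wiring, not the literal script source; $tgtcfg and $bperfcfg stand for the JSON blobs captured above, and $rootdir is the same shorthand as in the earlier sketch.

    # Replay the saved target config into a fresh nvmf_tgt inside the test netns.
    ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &

    # Replay the saved initiator config into a fresh bdevperf; the bdev_nvme_attach_controller
    # and bdev_enable_histogram entries now come from the config instead of live RPC calls.
    $rootdir/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
        -c <(echo "$bperfcfg") &

The later 'TLS support is considered experimental' notice from bdev_nvme_rpc.c, with no explicit attach RPC traced beforehand, shows the controller attach (psk "key0") being applied from that replayed config.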
00:25:06.470 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:25:06.470 "subsystems": [ 00:25:06.470 { 00:25:06.470 "subsystem": "keyring", 00:25:06.470 "config": [ 00:25:06.470 { 00:25:06.470 "method": "keyring_file_add_key", 00:25:06.470 "params": { 00:25:06.470 "name": "key0", 00:25:06.470 "path": "/tmp/tmp.9m3ydLOiUq" 00:25:06.470 } 00:25:06.470 } 00:25:06.470 ] 00:25:06.470 }, 00:25:06.470 { 00:25:06.470 "subsystem": "iobuf", 00:25:06.470 "config": [ 00:25:06.470 { 00:25:06.470 "method": "iobuf_set_options", 00:25:06.470 "params": { 00:25:06.470 "small_pool_count": 8192, 00:25:06.470 "large_pool_count": 1024, 00:25:06.470 "small_bufsize": 8192, 00:25:06.470 "large_bufsize": 135168, 00:25:06.470 "enable_numa": false 00:25:06.470 } 00:25:06.470 } 00:25:06.470 ] 00:25:06.470 }, 00:25:06.470 { 00:25:06.470 "subsystem": "sock", 00:25:06.470 "config": [ 00:25:06.470 { 00:25:06.470 "method": "sock_set_default_impl", 00:25:06.470 "params": { 00:25:06.470 "impl_name": "posix" 00:25:06.470 } 00:25:06.470 }, 00:25:06.470 { 00:25:06.470 "method": "sock_impl_set_options", 00:25:06.470 "params": { 00:25:06.470 "impl_name": "ssl", 00:25:06.470 "recv_buf_size": 4096, 00:25:06.470 "send_buf_size": 4096, 00:25:06.470 "enable_recv_pipe": true, 00:25:06.470 "enable_quickack": false, 00:25:06.470 "enable_placement_id": 0, 00:25:06.470 "enable_zerocopy_send_server": true, 00:25:06.470 "enable_zerocopy_send_client": false, 00:25:06.470 "zerocopy_threshold": 0, 00:25:06.470 "tls_version": 0, 00:25:06.470 "enable_ktls": false 00:25:06.470 } 00:25:06.470 }, 00:25:06.470 { 00:25:06.470 "method": "sock_impl_set_options", 00:25:06.470 "params": { 00:25:06.470 "impl_name": "posix", 00:25:06.470 "recv_buf_size": 2097152, 00:25:06.470 "send_buf_size": 2097152, 00:25:06.470 "enable_recv_pipe": true, 00:25:06.470 "enable_quickack": false, 00:25:06.470 "enable_placement_id": 0, 00:25:06.470 "enable_zerocopy_send_server": true, 00:25:06.470 "enable_zerocopy_send_client": false, 00:25:06.470 "zerocopy_threshold": 0, 00:25:06.470 "tls_version": 0, 00:25:06.470 "enable_ktls": false 00:25:06.470 } 00:25:06.470 } 00:25:06.470 ] 00:25:06.470 }, 00:25:06.470 { 00:25:06.470 "subsystem": "vmd", 00:25:06.470 "config": [] 00:25:06.470 }, 00:25:06.470 { 00:25:06.470 "subsystem": "accel", 00:25:06.470 "config": [ 00:25:06.470 { 00:25:06.470 "method": "accel_set_options", 00:25:06.470 "params": { 00:25:06.470 "small_cache_size": 128, 00:25:06.470 "large_cache_size": 16, 00:25:06.470 "task_count": 2048, 00:25:06.470 "sequence_count": 2048, 00:25:06.470 "buf_count": 2048 00:25:06.470 } 00:25:06.470 } 00:25:06.470 ] 00:25:06.470 }, 00:25:06.470 { 00:25:06.470 "subsystem": "bdev", 00:25:06.470 "config": [ 00:25:06.470 { 00:25:06.470 "method": "bdev_set_options", 00:25:06.470 "params": { 00:25:06.470 "bdev_io_pool_size": 65535, 00:25:06.470 "bdev_io_cache_size": 256, 00:25:06.470 "bdev_auto_examine": true, 00:25:06.470 "iobuf_small_cache_size": 128, 00:25:06.470 "iobuf_large_cache_size": 16 00:25:06.470 } 00:25:06.470 }, 00:25:06.470 { 00:25:06.470 "method": "bdev_raid_set_options", 00:25:06.470 "params": { 00:25:06.470 "process_window_size_kb": 1024, 00:25:06.470 "process_max_bandwidth_mb_sec": 0 00:25:06.470 } 00:25:06.470 }, 00:25:06.470 { 00:25:06.470 "method": "bdev_iscsi_set_options", 00:25:06.470 "params": { 00:25:06.470 "timeout_sec": 30 00:25:06.470 } 00:25:06.470 }, 00:25:06.470 { 00:25:06.470 "method": "bdev_nvme_set_options", 00:25:06.470 "params": { 00:25:06.470 "action_on_timeout": "none", 
00:25:06.470 "timeout_us": 0, 00:25:06.470 "timeout_admin_us": 0, 00:25:06.470 "keep_alive_timeout_ms": 10000, 00:25:06.470 "arbitration_burst": 0, 00:25:06.470 "low_priority_weight": 0, 00:25:06.470 "medium_priority_weight": 0, 00:25:06.470 "high_priority_weight": 0, 00:25:06.470 "nvme_adminq_poll_period_us": 10000, 00:25:06.470 "nvme_ioq_poll_period_us": 0, 00:25:06.470 "io_queue_requests": 512, 00:25:06.470 "delay_cmd_submit": true, 00:25:06.470 "transport_retry_count": 4, 00:25:06.470 "bdev_retry_count": 3, 00:25:06.470 "transport_ack_timeout": 0, 00:25:06.470 "ctrlr_loss_timeout_sec": 0, 00:25:06.470 "reconnect_delay_sec": 0, 00:25:06.470 "fast_io_fail_timeout_sec": 0, 00:25:06.470 "disable_auto_failback": false, 00:25:06.470 "generate_uuids": false, 00:25:06.470 "transport_tos": 0, 00:25:06.470 "nvme_error_stat": false, 00:25:06.470 "rdma_srq_size": 0, 00:25:06.470 "io_path_stat": false, 00:25:06.470 "allow_accel_sequence": false, 00:25:06.470 "rdma_max_cq_size": 0, 00:25:06.470 "rdma_cm_event_timeout_ms": 0, 00:25:06.470 "dhchap_digests": [ 00:25:06.470 "sha256", 00:25:06.470 "sha384", 00:25:06.470 "sha512" 00:25:06.470 ], 00:25:06.470 "dhchap_dhgroups": [ 00:25:06.470 "null", 00:25:06.470 "ffdhe2048", 00:25:06.470 "ffdhe3072", 00:25:06.470 "ffdhe4096", 00:25:06.470 "ffdhe6144", 00:25:06.470 "ffdhe8192" 00:25:06.470 ], 00:25:06.470 "rdma_umr_per_io": false 00:25:06.470 } 00:25:06.470 }, 00:25:06.470 { 00:25:06.470 "method": "bdev_nvme_attach_controller", 00:25:06.470 "params": { 00:25:06.470 "name": "nvme0", 00:25:06.470 "trtype": "TCP", 00:25:06.470 "adrfam": "IPv4", 00:25:06.470 "traddr": "10.0.0.2", 00:25:06.470 "trsvcid": "4420", 00:25:06.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:06.470 "prchk_reftag": false, 00:25:06.470 "prchk_guard": false, 00:25:06.470 "ctrlr_loss_timeout_sec": 0, 00:25:06.470 "reconnect_delay_sec": 0, 00:25:06.470 "fast_io_fail_timeout_sec": 0, 00:25:06.470 "psk": "key0", 00:25:06.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:06.470 "hdgst": false, 00:25:06.470 "ddgst": false, 00:25:06.470 "multipath": "multipath" 00:25:06.470 } 00:25:06.470 }, 00:25:06.470 { 00:25:06.470 "method": "bdev_nvme_set_hotplug", 00:25:06.470 "params": { 00:25:06.470 "period_us": 100000, 00:25:06.470 "enable": false 00:25:06.470 } 00:25:06.470 }, 00:25:06.470 { 00:25:06.470 "method": "bdev_enable_histogram", 00:25:06.470 "params": { 00:25:06.470 "name": "nvme0n1", 00:25:06.470 "enable": true 00:25:06.470 } 00:25:06.470 }, 00:25:06.470 { 00:25:06.470 "method": "bdev_wait_for_examine" 00:25:06.470 } 00:25:06.470 ] 00:25:06.471 }, 00:25:06.471 { 00:25:06.471 "subsystem": "nbd", 00:25:06.471 "config": [] 00:25:06.471 } 00:25:06.471 ] 00:25:06.471 }' 00:25:06.471 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.471 00:06:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:06.471 [2024-12-10 00:06:41.220905] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:25:06.471 [2024-12-10 00:06:41.220950] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395034 ] 00:25:06.471 [2024-12-10 00:06:41.297543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.471 [2024-12-10 00:06:41.337066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.730 [2024-12-10 00:06:41.491529] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:07.297 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.297 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:07.297 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:07.297 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:25:07.297 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.297 00:06:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:07.555 Running I/O for 1 seconds... 00:25:08.492 5298.00 IOPS, 20.70 MiB/s 00:25:08.492 Latency(us) 00:25:08.492 [2024-12-09T23:06:43.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.492 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:08.492 Verification LBA range: start 0x0 length 0x2000 00:25:08.492 nvme0n1 : 1.01 5346.75 20.89 0.00 0.00 23773.92 5100.41 27582.11 00:25:08.492 [2024-12-09T23:06:43.428Z] =================================================================================================================== 00:25:08.492 [2024-12-09T23:06:43.428Z] Total : 5346.75 20.89 0.00 0.00 23773.92 5100.41 27582.11 00:25:08.492 { 00:25:08.492 "results": [ 00:25:08.492 { 00:25:08.492 "job": "nvme0n1", 00:25:08.492 "core_mask": "0x2", 00:25:08.492 "workload": "verify", 00:25:08.492 "status": "finished", 00:25:08.492 "verify_range": { 00:25:08.492 "start": 0, 00:25:08.492 "length": 8192 00:25:08.492 }, 00:25:08.492 "queue_depth": 128, 00:25:08.492 "io_size": 4096, 00:25:08.492 "runtime": 1.014823, 00:25:08.492 "iops": 5346.74519596028, 00:25:08.492 "mibps": 20.885723421719845, 00:25:08.492 "io_failed": 0, 00:25:08.492 "io_timeout": 0, 00:25:08.492 "avg_latency_us": 23773.92470520361, 00:25:08.492 "min_latency_us": 5100.410434782609, 00:25:08.492 "max_latency_us": 27582.107826086958 00:25:08.492 } 00:25:08.492 ], 00:25:08.492 "core_count": 1 00:25:08.492 } 00:25:08.492 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:25:08.492 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:25:08.492 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:08.492 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:25:08.492 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:25:08.492 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' 
--id = --pid ']' 00:25:08.492 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:08.492 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:08.492 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:08.492 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:08.492 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:08.492 nvmf_trace.0 00:25:08.750 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 395034 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 395034 ']' 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 395034 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395034 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395034' 00:25:08.751 killing process with pid 395034 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 395034 00:25:08.751 Received shutdown signal, test time was about 1.000000 seconds 00:25:08.751 00:25:08.751 Latency(us) 00:25:08.751 [2024-12-09T23:06:43.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.751 [2024-12-09T23:06:43.687Z] =================================================================================================================== 00:25:08.751 [2024-12-09T23:06:43.687Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 395034 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:08.751 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:08.751 rmmod nvme_tcp 00:25:08.751 rmmod nvme_fabrics 00:25:09.010 rmmod nvme_keyring 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.010 00:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 394792 ']' 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 394792 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 394792 ']' 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 394792 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394792 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394792' 00:25:09.010 killing process with pid 394792 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 394792 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 394792 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.010 00:06:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.550 00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.BkwxuxBUJv /tmp/tmp.fCb6x47DNS /tmp/tmp.9m3ydLOiUq 00:25:11.550 00:25:11.550 real 1m19.630s 00:25:11.550 user 2m3.121s 00:25:11.550 sys 0m29.514s 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.550 ************************************ 00:25:11.550 END TEST nvmf_tls 00:25:11.550 
************************************ 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:11.550 ************************************ 00:25:11.550 START TEST nvmf_fips 00:25:11.550 ************************************ 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:11.550 * Looking for test storage... 00:25:11.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/fips 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.550 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:11.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.551 --rc genhtml_branch_coverage=1 00:25:11.551 --rc genhtml_function_coverage=1 00:25:11.551 --rc genhtml_legend=1 00:25:11.551 --rc geninfo_all_blocks=1 00:25:11.551 --rc geninfo_unexecuted_blocks=1 00:25:11.551 00:25:11.551 ' 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:11.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.551 --rc genhtml_branch_coverage=1 00:25:11.551 --rc genhtml_function_coverage=1 00:25:11.551 --rc genhtml_legend=1 00:25:11.551 --rc geninfo_all_blocks=1 00:25:11.551 --rc geninfo_unexecuted_blocks=1 00:25:11.551 00:25:11.551 ' 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:11.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.551 --rc genhtml_branch_coverage=1 00:25:11.551 --rc genhtml_function_coverage=1 00:25:11.551 --rc genhtml_legend=1 00:25:11.551 --rc geninfo_all_blocks=1 00:25:11.551 --rc geninfo_unexecuted_blocks=1 00:25:11.551 00:25:11.551 ' 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:11.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.551 --rc genhtml_branch_coverage=1 00:25:11.551 --rc genhtml_function_coverage=1 00:25:11.551 --rc genhtml_legend=1 00:25:11.551 --rc geninfo_all_blocks=1 00:25:11.551 --rc geninfo_unexecuted_blocks=1 00:25:11.551 00:25:11.551 ' 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.551 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:11.552 00:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:25:11.552 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:25:11.812 Error setting digest 00:25:11.812 40F2E245047F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:11.812 40F2E245047F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:11.812 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:25:11.812 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:11.812 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:11.812 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:11.812 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:11.812 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:11.812 
00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.812 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:11.812 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:11.812 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:11.812 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.812 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.812 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.812 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:11.812 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:11.813 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.813 00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.383 00:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:18.383 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:18.383 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.383 00:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:18.383 Found net devices under 0000:86:00.0: cvl_0_0 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:18.383 Found net devices under 0000:86:00.1: cvl_0_1 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:18.383 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:18.384 00:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:18.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:25:18.384 00:25:18.384 --- 10.0.0.2 ping statistics --- 00:25:18.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.384 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:18.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:25:18.384 00:25:18.384 --- 10.0.0.1 ping statistics --- 00:25:18.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.384 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=398959 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 398959 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 398959 ']' 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.384 00:06:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:18.384 [2024-12-10 00:06:52.519342] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
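The trace above brings the E810 ports into the layout the FIPS test runs against: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2, the initiator port cvl_0_1 keeps 10.0.0.1 in the root namespace, TCP port 4420 is opened in iptables, reachability is confirmed with one ping in each direction, and nvmf_tgt is then launched inside the namespace. A condensed recap of those commands follows; the interface names are specific to this host and the nvmf_tgt path is shortened here.

NS=cvl_0_0_ns_spdk
ip netns add $NS
ip link set cvl_0_0 netns $NS
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                       # root namespace -> target
ip netns exec $NS ping -c 1 10.0.0.1                     # target namespace -> initiator
ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &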
00:25:18.384 [2024-12-10 00:06:52.519391] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.384 [2024-12-10 00:06:52.601271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.384 [2024-12-10 00:06:52.641489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.384 [2024-12-10 00:06:52.641526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.384 [2024-12-10 00:06:52.641532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.384 [2024-12-10 00:06:52.641538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.384 [2024-12-10 00:06:52.641543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.384 [2024-12-10 00:06:52.642075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.rOZ 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.rOZ 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.rOZ 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.rOZ 00:25:18.644 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:25:18.644 [2024-12-10 00:06:53.555411] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.644 [2024-12-10 00:06:53.571421] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:18.645 [2024-12-10 00:06:53.571621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.904 malloc0 00:25:18.904 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:18.904 00:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=399095 00:25:18.904 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:18.904 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 399095 /var/tmp/bdevperf.sock 00:25:18.904 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 399095 ']' 00:25:18.904 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:18.904 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.904 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:18.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:18.904 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.904 00:06:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:18.904 [2024-12-10 00:06:53.703058] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:25:18.904 [2024-12-10 00:06:53.703108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid399095 ] 00:25:18.904 [2024-12-10 00:06:53.779807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.904 [2024-12-10 00:06:53.819740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:19.841 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:19.841 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:19.841 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.rOZ 00:25:19.841 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:20.101 [2024-12-10 00:06:54.877597] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:20.101 TLSTESTn1 00:25:20.101 00:06:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:20.358 Running I/O for 10 seconds... 
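The perform_tests call drives the attached TLSTESTn1 bdev for the full 10-second window, printing a running IOPS readout and then a JSON summary like the one below. If that summary is captured to a file, the headline numbers can be pulled out with jq, which this suite already uses elsewhere; the file name here is only a placeholder.

jq -r '.results[0] | "\(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' bdevperf_results.json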
00:25:22.236 4144.00 IOPS, 16.19 MiB/s [2024-12-09T23:06:58.109Z] 4173.00 IOPS, 16.30 MiB/s [2024-12-09T23:06:59.490Z] 4391.67 IOPS, 17.15 MiB/s [2024-12-09T23:07:00.427Z] 4598.50 IOPS, 17.96 MiB/s [2024-12-09T23:07:01.364Z] 4768.40 IOPS, 18.63 MiB/s [2024-12-09T23:07:02.301Z] 4839.33 IOPS, 18.90 MiB/s [2024-12-09T23:07:03.239Z] 4770.14 IOPS, 18.63 MiB/s [2024-12-09T23:07:04.202Z] 4851.50 IOPS, 18.95 MiB/s [2024-12-09T23:07:05.140Z] 4918.33 IOPS, 19.21 MiB/s [2024-12-09T23:07:05.140Z] 4950.80 IOPS, 19.34 MiB/s 00:25:30.204 Latency(us) 00:25:30.204 [2024-12-09T23:07:05.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.204 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:30.204 Verification LBA range: start 0x0 length 0x2000 00:25:30.204 TLSTESTn1 : 10.01 4957.02 19.36 0.00 0.00 25786.19 5185.89 36472.21 00:25:30.204 [2024-12-09T23:07:05.140Z] =================================================================================================================== 00:25:30.204 [2024-12-09T23:07:05.140Z] Total : 4957.02 19.36 0.00 0.00 25786.19 5185.89 36472.21 00:25:30.204 { 00:25:30.204 "results": [ 00:25:30.204 { 00:25:30.204 "job": "TLSTESTn1", 00:25:30.204 "core_mask": "0x4", 00:25:30.204 "workload": "verify", 00:25:30.204 "status": "finished", 00:25:30.204 "verify_range": { 00:25:30.204 "start": 0, 00:25:30.204 "length": 8192 00:25:30.204 }, 00:25:30.204 "queue_depth": 128, 00:25:30.204 "io_size": 4096, 00:25:30.204 "runtime": 10.012874, 00:25:30.204 "iops": 4957.018334596041, 00:25:30.204 "mibps": 19.363352869515786, 00:25:30.204 "io_failed": 0, 00:25:30.204 "io_timeout": 0, 00:25:30.204 "avg_latency_us": 25786.18730456507, 00:25:30.204 "min_latency_us": 5185.892173913044, 00:25:30.204 "max_latency_us": 36472.208695652174 00:25:30.204 } 00:25:30.204 ], 00:25:30.204 "core_count": 1 00:25:30.204 } 00:25:30.204 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:30.204 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:30.204 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:25:30.204 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:25:30.204 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:30.204 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:30.204 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:30.204 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:30.204 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:30.204 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:30.204 nvmf_trace.0 00:25:30.464 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:25:30.464 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 399095 00:25:30.464 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 399095 ']' 00:25:30.464 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 399095 00:25:30.464 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:30.464 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:30.464 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 399095 00:25:30.464 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:30.464 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:30.464 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 399095' 00:25:30.464 killing process with pid 399095 00:25:30.464 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 399095 00:25:30.464 Received shutdown signal, test time was about 10.000000 seconds 00:25:30.464 00:25:30.464 Latency(us) 00:25:30.464 [2024-12-09T23:07:05.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.464 [2024-12-09T23:07:05.400Z] =================================================================================================================== 00:25:30.464 [2024-12-09T23:07:05.400Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:30.464 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 399095 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:30.723 rmmod nvme_tcp 00:25:30.723 rmmod nvme_fabrics 00:25:30.723 rmmod nvme_keyring 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 398959 ']' 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 398959 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 398959 ']' 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 398959 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 398959 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:30.723 00:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 398959' 00:25:30.723 killing process with pid 398959 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 398959 00:25:30.723 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 398959 00:25:30.982 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:30.982 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:30.982 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:30.982 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:30.982 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:25:30.982 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:30.982 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:25:30.982 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:30.982 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:30.982 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.982 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.982 00:07:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.888 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:32.888 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.rOZ 00:25:32.888 00:25:32.888 real 0m21.680s 00:25:32.888 user 0m23.723s 00:25:32.888 sys 0m9.334s 00:25:32.888 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:32.888 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:32.888 ************************************ 00:25:32.888 END TEST nvmf_fips 00:25:32.888 ************************************ 00:25:32.888 00:07:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:32.888 00:07:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:32.888 00:07:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:32.888 00:07:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:33.149 ************************************ 00:25:33.149 START TEST nvmf_control_msg_list 00:25:33.149 ************************************ 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:33.149 * Looking for test storage... 
00:25:33.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:33.149 00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:33.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.149 --rc genhtml_branch_coverage=1 00:25:33.149 --rc genhtml_function_coverage=1 00:25:33.149 --rc genhtml_legend=1 00:25:33.149 --rc geninfo_all_blocks=1 00:25:33.149 --rc geninfo_unexecuted_blocks=1 00:25:33.149 00:25:33.149 ' 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:33.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.149 --rc genhtml_branch_coverage=1 00:25:33.149 --rc genhtml_function_coverage=1 00:25:33.149 --rc genhtml_legend=1 00:25:33.149 --rc geninfo_all_blocks=1 00:25:33.149 --rc geninfo_unexecuted_blocks=1 00:25:33.149 00:25:33.149 ' 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:33.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.149 --rc genhtml_branch_coverage=1 00:25:33.149 --rc genhtml_function_coverage=1 00:25:33.149 --rc genhtml_legend=1 00:25:33.149 --rc geninfo_all_blocks=1 00:25:33.149 --rc geninfo_unexecuted_blocks=1 00:25:33.149 00:25:33.149 ' 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:33.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.149 --rc genhtml_branch_coverage=1 00:25:33.149 --rc genhtml_function_coverage=1 00:25:33.149 --rc genhtml_legend=1 00:25:33.149 --rc geninfo_all_blocks=1 00:25:33.149 --rc geninfo_unexecuted_blocks=1 00:25:33.149 00:25:33.149 ' 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:33.149 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:33.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:33.150 00:07:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:39.725 00:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:39.725 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:39.726 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.726 00:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:39.726 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:39.726 Found net devices under 0000:86:00.0: cvl_0_0 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:39.726 Found net devices under 0000:86:00.1: cvl_0_1 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.726 00:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:39.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:25:39.726 00:25:39.726 --- 10.0.0.2 ping statistics --- 00:25:39.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.726 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:39.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:25:39.726 00:25:39.726 --- 10.0.0.1 ping statistics --- 00:25:39.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.726 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=404637 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 404637 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 404637 ']' 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:39.726 00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:39.726 [2024-12-10 00:07:13.997770] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:25:39.726 [2024-12-10 00:07:13.997818] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.726 [2024-12-10 00:07:14.079475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.726 [2024-12-10 00:07:14.119364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.726 [2024-12-10 00:07:14.119399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.726 [2024-12-10 00:07:14.119407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.726 [2024-12-10 00:07:14.119412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.726 [2024-12-10 00:07:14.119417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
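[Editor's sketch] The trace above shows nvmftestinit moving the target-side e810 port into its own network namespace before nvmfappstart launches nvmf_tgt inside it. The following is a minimal, hedged recap of that bring-up, assuming a root shell on this CI host; the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, port 4420 and the workspace path are specific to this run, and the iptables rule is simplified (the harness also tags it with an SPDK_NVMF comment).

    # Namespaced TCP target setup, as traced by nvmf/common.sh above (sketch only)
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                   # reachability check, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the target inside the namespace, as nvmfappstart does here
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF

Everything after this point in the test talks to the target at 10.0.0.2:4420 through that namespace boundary.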
00:25:39.726 [2024-12-10 00:07:14.119999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.726 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:39.727 [2024-12-10 00:07:14.256443] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:39.727 Malloc0 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.727 00:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:39.727 [2024-12-10 00:07:14.296787] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=404699 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=404700 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=404701 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 404699 00:25:39.727 00:07:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:39.727 [2024-12-10 00:07:14.395586] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:39.727 [2024-12-10 00:07:14.395971] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:39.727 [2024-12-10 00:07:14.396292] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:40.665 Initializing NVMe Controllers 00:25:40.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:40.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:40.665 Initialization complete. Launching workers. 
00:25:40.665 ======================================================== 00:25:40.665 Latency(us) 00:25:40.665 Device Information : IOPS MiB/s Average min max 00:25:40.665 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 30.00 0.12 34124.31 237.07 41103.70 00:25:40.665 ======================================================== 00:25:40.665 Total : 30.00 0.12 34124.31 237.07 41103.70 00:25:40.665 00:25:40.665 Initializing NVMe Controllers 00:25:40.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:40.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:40.665 Initialization complete. Launching workers. 00:25:40.665 ======================================================== 00:25:40.665 Latency(us) 00:25:40.665 Device Information : IOPS MiB/s Average min max 00:25:40.665 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6979.00 27.26 142.93 133.18 328.74 00:25:40.665 ======================================================== 00:25:40.665 Total : 6979.00 27.26 142.93 133.18 328.74 00:25:40.665 00:25:40.665 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 404700 00:25:40.665 Initializing NVMe Controllers 00:25:40.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:40.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:40.665 Initialization complete. Launching workers. 00:25:40.665 ======================================================== 00:25:40.665 Latency(us) 00:25:40.665 Device Information : IOPS MiB/s Average min max 00:25:40.665 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 32.00 0.12 31985.52 245.54 41190.99 00:25:40.665 ======================================================== 00:25:40.665 Total : 32.00 0.12 31985.52 245.54 41190.99 00:25:40.666 00:25:40.666 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 404701 00:25:40.666 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:40.666 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:40.666 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:40.666 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:40.666 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:40.666 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:40.666 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:40.666 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:40.666 rmmod nvme_tcp 00:25:40.925 rmmod nvme_fabrics 00:25:40.925 rmmod nvme_keyring 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' 
-n 404637 ']' 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 404637 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 404637 ']' 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 404637 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 404637 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 404637' 00:25:40.925 killing process with pid 404637 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 404637 00:25:40.925 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 404637 00:25:41.184 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:41.184 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:41.185 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:41.185 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:41.185 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:41.185 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:41.185 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:41.185 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:41.185 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:41.185 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.185 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.185 00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.093 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:43.093 00:25:43.093 real 0m10.101s 00:25:43.093 user 0m6.641s 00:25:43.093 sys 0m5.387s 00:25:43.093 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:43.093 00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:43.093 ************************************ 00:25:43.093 END TEST nvmf_control_msg_list 00:25:43.093 ************************************ 00:25:43.093 
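[Editor's sketch] For reference, the control_msg_list test that just finished drives the sequence visible in the rpc_cmd trace above: a TCP transport restricted to an in-capsule data size of 768 bytes and a single control message (--control-msg-num 1), a subsystem backed by a 32 MiB malloc bdev, a listener on 10.0.0.2:4420, and then three concurrent spdk_nvme_perf initiators at queue depth 1 so they contend for that one control message slot. A condensed sketch of the same steps, assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock (rpc_cmd in the trace is a wrapper around it) and the paths of this CI host:

    # RPC sequence issued by target/control_msg_list.sh (sketch, flags copied from the trace above)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # one of the three competing initiators (cores 0x2, 0x4, 0x8 in the trace), 4 KiB random reads for 1 s
    ./build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The MiB/s column in these result tables is simply IOPS scaled by the 4 KiB I/O size (IOPS x 4096 / 2^20): 6979 x 4096 / 1048576 = 27.26 MiB/s for the lcore-1 run above, and likewise 4957.02 IOPS -> 19.36 MiB/s in the earlier TLS run. The lcore-2 and lcore-3 initiators complete only ~30 I/Os each in the same window, which is presumably the queuing behind the single control message buffer that this test is designed to exercise.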
00:07:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:43.093 00:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:43.093 00:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:43.093 00:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:43.093 ************************************ 00:25:43.093 START TEST nvmf_wait_for_buf 00:25:43.093 ************************************ 00:25:43.093 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:43.354 * Looking for test storage... 00:25:43.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:43.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.354 --rc genhtml_branch_coverage=1 00:25:43.354 --rc genhtml_function_coverage=1 00:25:43.354 --rc genhtml_legend=1 00:25:43.354 --rc geninfo_all_blocks=1 00:25:43.354 --rc geninfo_unexecuted_blocks=1 00:25:43.354 00:25:43.354 ' 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:43.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.354 --rc genhtml_branch_coverage=1 00:25:43.354 --rc genhtml_function_coverage=1 00:25:43.354 --rc genhtml_legend=1 00:25:43.354 --rc geninfo_all_blocks=1 00:25:43.354 --rc geninfo_unexecuted_blocks=1 00:25:43.354 00:25:43.354 ' 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:43.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.354 --rc genhtml_branch_coverage=1 00:25:43.354 --rc genhtml_function_coverage=1 00:25:43.354 --rc genhtml_legend=1 00:25:43.354 --rc geninfo_all_blocks=1 00:25:43.354 --rc geninfo_unexecuted_blocks=1 00:25:43.354 00:25:43.354 ' 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:43.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:43.354 --rc genhtml_branch_coverage=1 00:25:43.354 --rc genhtml_function_coverage=1 00:25:43.354 --rc genhtml_legend=1 00:25:43.354 --rc geninfo_all_blocks=1 00:25:43.354 --rc geninfo_unexecuted_blocks=1 00:25:43.354 00:25:43.354 ' 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:25:43.354 00:07:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:43.354 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:43.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:43.355 00:07:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.929 
00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:49.929 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:49.929 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:49.929 Found net devices under 0000:86:00.0: cvl_0_0 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:49.929 Found net devices under 0000:86:00.1: cvl_0_1 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.929 00:07:23 
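
The scan traced above is gather_supported_nvmf_pci_devs from nvmf/common.sh: it collects the Intel E810 ports (vendor:device 0x8086:0x159b, bound to the ice driver), resolves each PCI address to its kernel net device through /sys/bus/pci/devices/$pci/net/, and records cvl_0_0 and cvl_0_1 for the TCP test. A minimal standalone sketch of the same lookup, assuming lspci is available (the script itself walks a pre-built pci_bus_cache rather than calling lspci):

    # List E810 ports (8086:159b) and the net devices bound to them,
    # mirroring the sysfs lookup performed by nvmf/common.sh in the trace.
    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] && echo "Found net device under ${pci}: $(basename "$netdir")"
        done
    done
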
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:49.929 00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.929 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.929 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.929 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:49.929 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:49.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:25:49.929 00:25:49.929 --- 10.0.0.2 ping statistics --- 00:25:49.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.929 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:25:49.929 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:49.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:25:49.929 00:25:49.929 --- 10.0.0.1 ping statistics --- 00:25:49.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.929 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:25:49.929 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=408455 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 408455 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 408455 ']' 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.930 00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:49.930 [2024-12-10 00:07:24.206678] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
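
nvmf_tcp_init, traced above, builds the loopback test topology out of the two physical E810 ports rather than veth pairs: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24 to play the target side, while cvl_0_1 stays in the root namespace as the initiator side on 10.0.0.1/24; an iptables rule opens TCP port 4420 on the initiator interface, and a ping in each direction verifies the path before nvme-tcp is loaded and the target application is started inside the namespace. Condensed from the trace (interface names and addresses are the ones from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the trace tags this rule with an SPDK_NVMF comment
    ping -c 1 10.0.0.2                                 # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator
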
00:25:49.930 [2024-12-10 00:07:24.206724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.930 [2024-12-10 00:07:24.284042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.930 [2024-12-10 00:07:24.323460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.930 [2024-12-10 00:07:24.323494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.930 [2024-12-10 00:07:24.323500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.930 [2024-12-10 00:07:24.323506] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.930 [2024-12-10 00:07:24.323511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.930 [2024-12-10 00:07:24.324091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:50.189 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.189 00:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:50.449 Malloc0 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:50.449 [2024-12-10 00:07:25.173295] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:50.449 [2024-12-10 00:07:25.201492] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.449 00:07:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:50.449 [2024-12-10 00:07:25.285226] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:51.830 Initializing NVMe Controllers 00:25:51.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:51.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:51.830 Initialization complete. Launching workers. 00:25:51.830 ======================================================== 00:25:51.830 Latency(us) 00:25:51.830 Device Information : IOPS MiB/s Average min max 00:25:51.830 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33571.18 30951.72 71059.75 00:25:51.830 ======================================================== 00:25:51.830 Total : 124.00 15.50 33571.18 30951.72 71059.75 00:25:51.830 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:51.830 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:51.830 rmmod nvme_tcp 00:25:52.089 rmmod nvme_fabrics 00:25:52.089 rmmod nvme_keyring 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 408455 ']' 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 408455 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 408455 ']' 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 408455 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
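
The assertion behind nvmf_wait_for_buf is visible in the trace: the target was started with --wait-for-rpc so that iobuf_set_options could shrink the small iobuf pool to 154 buffers of 8192 bytes before framework_start_init allocated it, and the TCP transport was created with only 24 shared data buffers and a buffer cache of 24 (-n 24 -b 24). Under the 4-deep 128 KiB random-read load from spdk_nvme_perf the undersized pool runs dry, buffer allocations have to be retried, and iobuf_get_stats reports a non-zero small_pool.retry count (1958 in this run); the test only fails if that counter is zero. A hedged way to repeat the check against a still-running target, assuming rpc_cmd in the test wraps scripts/rpc.py on the default /var/tmp/spdk.sock:

    # Query the retry counter the test asserts on; non-zero means the transport
    # had to wait for small iobuf buffers at least once.
    retries=$(scripts/rpc.py iobuf_get_stats \
              | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    if [[ "$retries" -eq 0 ]]; then
        echo "FAIL: no buffer retries observed"
    else
        echo "OK: small iobuf pool was exhausted and retried ${retries} times"
    fi
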
common/autotest_common.sh@959 -- # uname 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 408455 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 408455' 00:25:52.089 killing process with pid 408455 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 408455 00:25:52.089 00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 408455 00:25:52.089 00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:52.089 00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:52.089 00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:52.349 00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:52.349 00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:52.349 00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:52.349 00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:52.349 00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:52.349 00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:52.349 00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.349 00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.349 00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.255 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:54.255 00:25:54.255 real 0m11.089s 00:25:54.255 user 0m4.802s 00:25:54.255 sys 0m4.876s 00:25:54.255 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:54.255 00:07:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:54.255 ************************************ 00:25:54.255 END TEST nvmf_wait_for_buf 00:25:54.255 ************************************ 00:25:54.255 00:07:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:25:54.255 00:07:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:54.255 00:07:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:25:54.255 00:07:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:25:54.255 00:07:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:25:54.255 00:07:29 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:00.830 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:00.830 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:00.830 Found net devices under 0000:86:00.0: cvl_0_0 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.830 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:00.831 Found net devices under 0000:86:00.1: cvl_0_1 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:00.831 ************************************ 00:26:00.831 START TEST nvmf_perf_adq 00:26:00.831 ************************************ 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:00.831 * Looking for test storage... 00:26:00.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:00.831 00:07:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:00.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.831 --rc genhtml_branch_coverage=1 00:26:00.831 --rc genhtml_function_coverage=1 00:26:00.831 --rc genhtml_legend=1 00:26:00.831 --rc geninfo_all_blocks=1 00:26:00.831 --rc geninfo_unexecuted_blocks=1 00:26:00.831 00:26:00.831 ' 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:00.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.831 --rc genhtml_branch_coverage=1 00:26:00.831 --rc genhtml_function_coverage=1 00:26:00.831 --rc genhtml_legend=1 00:26:00.831 --rc geninfo_all_blocks=1 00:26:00.831 --rc geninfo_unexecuted_blocks=1 00:26:00.831 00:26:00.831 ' 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:00.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.831 --rc genhtml_branch_coverage=1 00:26:00.831 --rc genhtml_function_coverage=1 00:26:00.831 --rc genhtml_legend=1 00:26:00.831 --rc geninfo_all_blocks=1 00:26:00.831 --rc geninfo_unexecuted_blocks=1 00:26:00.831 00:26:00.831 ' 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:00.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.831 --rc genhtml_branch_coverage=1 00:26:00.831 --rc genhtml_function_coverage=1 00:26:00.831 --rc genhtml_legend=1 00:26:00.831 --rc geninfo_all_blocks=1 00:26:00.831 --rc geninfo_unexecuted_blocks=1 00:26:00.831 00:26:00.831 ' 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 
00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.831 00:07:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.831 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:00.832 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.832 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:26:00.832 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:00.832 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:00.832 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.832 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.832 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.832 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:00.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:00.832 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:00.832 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:00.832 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:00.832 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:00.832 00:07:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:00.832 00:07:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:06.110 00:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:06.110 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:06.110 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:06.110 Found net devices under 0000:86:00.0: cvl_0_0 00:26:06.110 00:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:06.110 Found net devices under 0000:86:00.1: cvl_0_1 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:26:06.110 00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:26:07.491 00:07:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:26:10.780 00:07:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:16.058 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:16.058 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:16.058 Found net devices under 0000:86:00.0: cvl_0_0 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:16.058 Found net devices under 0000:86:00.1: cvl_0_1 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:16.058 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.059 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:16.319 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:16.319 00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:16.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:16.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:26:16.319 00:26:16.319 --- 10.0.0.2 ping statistics --- 00:26:16.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.319 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:16.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:16.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:26:16.319 00:26:16.319 --- 10.0.0.1 ping statistics --- 00:26:16.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.319 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=417169 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 417169 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 417169 ']' 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.319 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:16.319 [2024-12-10 00:07:51.114203] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
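[editor's note] The nvmf_tcp_init entries above show the two ice ports being split between a target-side network namespace and the host-side initiator before nvmf_tgt is launched. The following is a condensed, editorial sketch of those steps, not part of the captured log; the interface names (cvl_0_0, cvl_0_1), namespace name (cvl_0_0_ns_spdk), and 10.0.0.x addresses are copied from the log lines above, and running it assumes root privileges and the same two-port E810 setup.
# Sketch of the target/initiator split performed by nvmf_tcp_init (values taken from this log)
ip netns add cvl_0_0_ns_spdk                               # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address stays in the default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic to port 4420 on the initiator-side interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# verify connectivity in both directions, as the log does
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1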
00:26:16.319 [2024-12-10 00:07:51.114251] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.319 [2024-12-10 00:07:51.192447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:16.319 [2024-12-10 00:07:51.237576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.319 [2024-12-10 00:07:51.237612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.319 [2024-12-10 00:07:51.237620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.319 [2024-12-10 00:07:51.237627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.319 [2024-12-10 00:07:51.237631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:16.319 [2024-12-10 00:07:51.240179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.319 [2024-12-10 00:07:51.240221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.319 [2024-12-10 00:07:51.240329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:16.319 [2024-12-10 00:07:51.240329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.257 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.257 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:26:17.257 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:17.257 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:17.257 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:17.257 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.257 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:26:17.257 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:17.257 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:17.257 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.257 00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.257 
00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:17.257 [2024-12-10 00:07:52.128797] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:17.257 Malloc1 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.257 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:17.517 [2024-12-10 00:07:52.192560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.517 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.517 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=417282 00:26:17.517 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:26:17.517 00:07:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:19.443 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:26:19.443 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.443 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:19.443 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.443 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:26:19.443 "tick_rate": 2300000000, 00:26:19.443 "poll_groups": [ 00:26:19.443 { 00:26:19.443 "name": "nvmf_tgt_poll_group_000", 00:26:19.443 "admin_qpairs": 1, 00:26:19.443 "io_qpairs": 1, 00:26:19.443 "current_admin_qpairs": 1, 00:26:19.443 "current_io_qpairs": 1, 00:26:19.443 "pending_bdev_io": 0, 00:26:19.443 "completed_nvme_io": 19624, 00:26:19.443 "transports": [ 00:26:19.443 { 00:26:19.443 "trtype": "TCP" 00:26:19.443 } 00:26:19.443 ] 00:26:19.443 }, 00:26:19.443 { 00:26:19.443 "name": "nvmf_tgt_poll_group_001", 00:26:19.443 "admin_qpairs": 0, 00:26:19.443 "io_qpairs": 1, 00:26:19.443 "current_admin_qpairs": 0, 00:26:19.443 "current_io_qpairs": 1, 00:26:19.443 "pending_bdev_io": 0, 00:26:19.443 "completed_nvme_io": 20178, 00:26:19.443 "transports": [ 00:26:19.443 { 00:26:19.443 "trtype": "TCP" 00:26:19.443 } 00:26:19.443 ] 00:26:19.443 }, 00:26:19.443 { 00:26:19.443 "name": "nvmf_tgt_poll_group_002", 00:26:19.443 "admin_qpairs": 0, 00:26:19.443 "io_qpairs": 1, 00:26:19.443 "current_admin_qpairs": 0, 00:26:19.443 "current_io_qpairs": 1, 00:26:19.443 "pending_bdev_io": 0, 00:26:19.443 "completed_nvme_io": 20192, 00:26:19.443 "transports": [ 00:26:19.443 { 00:26:19.443 "trtype": "TCP" 00:26:19.443 } 00:26:19.443 ] 00:26:19.443 }, 00:26:19.443 { 00:26:19.443 "name": "nvmf_tgt_poll_group_003", 00:26:19.443 "admin_qpairs": 0, 00:26:19.443 "io_qpairs": 1, 00:26:19.443 "current_admin_qpairs": 0, 00:26:19.443 "current_io_qpairs": 1, 00:26:19.443 "pending_bdev_io": 0, 00:26:19.443 "completed_nvme_io": 19553, 00:26:19.443 "transports": [ 00:26:19.443 { 00:26:19.443 "trtype": "TCP" 00:26:19.443 } 00:26:19.443 ] 00:26:19.443 } 00:26:19.443 ] 00:26:19.443 }' 00:26:19.443 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:19.443 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:26:19.443 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:26:19.443 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:26:19.443 00:07:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 417282 00:26:27.584 Initializing NVMe Controllers 00:26:27.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:27.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:27.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:27.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:27.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:26:27.584 Initialization complete. Launching workers. 00:26:27.584 ======================================================== 00:26:27.584 Latency(us) 00:26:27.584 Device Information : IOPS MiB/s Average min max 00:26:27.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10658.30 41.63 6004.87 2380.55 9965.06 00:26:27.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10689.80 41.76 5988.07 2135.72 10202.41 00:26:27.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10424.30 40.72 6139.03 2490.69 10310.21 00:26:27.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10524.30 41.11 6081.89 2290.01 10068.08 00:26:27.584 ======================================================== 00:26:27.584 Total : 42296.70 165.22 6052.85 2135.72 10310.21 00:26:27.584 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:27.584 rmmod nvme_tcp 00:26:27.584 rmmod nvme_fabrics 00:26:27.584 rmmod nvme_keyring 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 417169 ']' 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 417169 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 417169 ']' 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 417169 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 417169 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 417169' 00:26:27.584 killing process with pid 417169 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 417169 00:26:27.584 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 417169 00:26:27.844 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:27.844 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:27.844 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:27.844 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:27.844 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:26:27.844 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:27.844 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:26:27.844 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:27.844 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:27.844 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.844 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.844 00:08:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.385 00:08:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:30.385 00:08:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:26:30.385 00:08:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:26:30.385 00:08:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:26:30.953 00:08:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:26:33.491 00:08:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:38.773 00:08:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:38.773 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:38.773 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:38.773 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:38.774 Found net devices under 0000:86:00.0: cvl_0_0 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:38.774 00:08:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:38.774 Found net devices under 0000:86:00.1: cvl_0_1 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:38.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:38.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:26:38.774 00:26:38.774 --- 10.0.0.2 ping statistics --- 00:26:38.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.774 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:38.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:38.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:26:38.774 00:26:38.774 --- 10.0.0.1 ping statistics --- 00:26:38.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.774 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:38.774 net.core.busy_poll = 1 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:26:38.774 net.core.busy_read = 1 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:38.774 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=421193 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 421193 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 421193 ']' 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.034 00:08:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.034 [2024-12-10 00:08:13.862430] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:26:39.034 [2024-12-10 00:08:13.862475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.034 [2024-12-10 00:08:13.943501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:39.294 [2024-12-10 00:08:13.985934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
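[editor's note] The adq_configure_driver entries in this second pass show how the ice port is prepared for ADQ before the target is restarted: hardware TC offload is enabled, busy polling is turned on, an mqprio qdisc with two traffic classes is installed, and a flower filter steers NVMe/TCP traffic (dst port 4420) into TC 1 in hardware. The sketch below is an editorial condensation of exactly those commands as they appear in the log; it assumes the same cvl_0_0 interface inside the cvl_0_0_ns_spdk namespace and the workspace-specific helper-script path shown above.
# ADQ driver configuration as executed by adq_configure_driver (values taken from this log)
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded to hardware
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
# steer NVMe/TCP (dst_port 4420 toward the target IP) into TC1 entirely in hardware (skip_sw)
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# pin XPS/receive queues via the SPDK helper script used by the test
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0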
00:26:39.294 [2024-12-10 00:08:13.985968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.294 [2024-12-10 00:08:13.985975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.294 [2024-12-10 00:08:13.985981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.294 [2024-12-10 00:08:13.985985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.294 [2024-12-10 00:08:13.987372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.294 [2024-12-10 00:08:13.987472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.294 [2024-12-10 00:08:13.987582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.294 [2024-12-10 00:08:13.987583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.295 00:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.295 [2024-12-10 00:08:14.184972] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.295 Malloc1 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.295 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.555 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.555 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.555 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.555 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.555 [2024-12-10 00:08:14.238343] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.555 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.555 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=421323 00:26:39.555 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:26:39.555 00:08:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:41.462 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:26:41.462 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.462 00:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.462 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.462 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:26:41.462 "tick_rate": 2300000000, 00:26:41.462 "poll_groups": [ 00:26:41.462 { 00:26:41.462 "name": "nvmf_tgt_poll_group_000", 00:26:41.462 "admin_qpairs": 1, 00:26:41.462 "io_qpairs": 4, 00:26:41.462 "current_admin_qpairs": 1, 00:26:41.462 "current_io_qpairs": 4, 00:26:41.462 "pending_bdev_io": 0, 00:26:41.463 "completed_nvme_io": 43917, 00:26:41.463 "transports": [ 00:26:41.463 { 00:26:41.463 "trtype": "TCP" 00:26:41.463 } 00:26:41.463 ] 00:26:41.463 }, 00:26:41.463 { 00:26:41.463 "name": "nvmf_tgt_poll_group_001", 00:26:41.463 "admin_qpairs": 0, 00:26:41.463 "io_qpairs": 0, 00:26:41.463 "current_admin_qpairs": 0, 00:26:41.463 "current_io_qpairs": 0, 00:26:41.463 "pending_bdev_io": 0, 00:26:41.463 "completed_nvme_io": 0, 00:26:41.463 "transports": [ 00:26:41.463 { 00:26:41.463 "trtype": "TCP" 00:26:41.463 } 00:26:41.463 ] 00:26:41.463 }, 00:26:41.463 { 00:26:41.463 "name": "nvmf_tgt_poll_group_002", 00:26:41.463 "admin_qpairs": 0, 00:26:41.463 "io_qpairs": 0, 00:26:41.463 "current_admin_qpairs": 0, 00:26:41.463 "current_io_qpairs": 0, 00:26:41.463 "pending_bdev_io": 0, 00:26:41.463 "completed_nvme_io": 0, 00:26:41.463 "transports": [ 00:26:41.463 { 00:26:41.463 "trtype": "TCP" 00:26:41.463 } 00:26:41.463 ] 00:26:41.463 }, 00:26:41.463 { 00:26:41.463 "name": "nvmf_tgt_poll_group_003", 00:26:41.463 "admin_qpairs": 0, 00:26:41.463 "io_qpairs": 0, 00:26:41.463 "current_admin_qpairs": 0, 00:26:41.463 "current_io_qpairs": 0, 00:26:41.463 "pending_bdev_io": 0, 00:26:41.463 "completed_nvme_io": 0, 00:26:41.463 "transports": [ 00:26:41.463 { 00:26:41.463 "trtype": "TCP" 00:26:41.463 } 00:26:41.463 ] 00:26:41.463 } 00:26:41.463 ] 00:26:41.463 }' 00:26:41.463 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:41.463 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:26:41.463 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:26:41.463 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:26:41.463 00:08:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 421323 00:26:49.591 Initializing NVMe Controllers 00:26:49.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:49.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:49.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:49.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:49.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:49.591 Initialization complete. Launching workers. 
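While spdk_nvme_perf (launched at perf_adq.sh@101 with four initiator cores via -c 0xF0) drives randread traffic at the subsystem, the script samples nvmf_get_stats on the target. With --enable-placement-id 1 all four connections land on nvmf_tgt_poll_group_000, so three of the four poll groups report current_io_qpairs of 0; judging by the [[ 3 -lt 2 ]] trace at perf_adq.sh@109, fewer than two idle groups would be treated as a failure. The pair of commands below reproduces the load and the check, with paths shortened to the SPDK tree and rpc.py standing in for the harness's rpc_cmd wrapper:

# initiator-side load, flags exactly as launched above
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
# target-side check: count poll groups that are carrying no I/O qpairs
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l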
00:26:49.591 ======================================================== 00:26:49.591 Latency(us) 00:26:49.591 Device Information : IOPS MiB/s Average min max 00:26:49.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5796.50 22.64 11044.37 1526.94 56844.25 00:26:49.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6010.49 23.48 10662.72 1320.75 57973.67 00:26:49.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5754.91 22.48 11157.36 1486.91 57372.26 00:26:49.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5301.64 20.71 12078.81 1472.88 56770.87 00:26:49.591 ======================================================== 00:26:49.591 Total : 22863.53 89.31 11212.35 1320.75 57973.67 00:26:49.591 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:49.591 rmmod nvme_tcp 00:26:49.591 rmmod nvme_fabrics 00:26:49.591 rmmod nvme_keyring 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 421193 ']' 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 421193 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 421193 ']' 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 421193 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.591 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 421193 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 421193' 00:26:49.851 killing process with pid 421193 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 421193 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 421193 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:49.851 00:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.851 00:08:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.403 00:08:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:52.403 00:08:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:26:52.403 00:26:52.403 real 0m51.974s 00:26:52.403 user 2m46.944s 00:26:52.403 sys 0m11.154s 00:26:52.403 00:08:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:52.403 00:08:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:52.403 ************************************ 00:26:52.403 END TEST nvmf_perf_adq 00:26:52.403 ************************************ 00:26:52.403 00:08:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:52.403 00:08:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:52.403 00:08:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:52.403 00:08:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:52.403 ************************************ 00:26:52.403 START TEST nvmf_shutdown 00:26:52.403 ************************************ 00:26:52.403 00:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:52.403 * Looking for test storage... 
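nvmftestfini above unwinds the environment in roughly the reverse order it was built. Stripped of the harness plumbing it amounts to the following sketch; the namespace-deletion step is an assumption about what the _remove_spdk_ns helper ends up doing rather than something spelled out in the trace:

modprobe -v -r nvme-tcp                                  # rmmod nvme_tcp / nvme_fabrics / nvme_keyring above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                       # simplified killprocess: stop the nvmf_tgt reactors
iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK_NVMF-tagged ACCEPT rule
ip netns delete cvl_0_0_ns_spdk                          # assumption: what _remove_spdk_ns boils down to
ip -4 addr flush cvl_0_1                                 # clear the initiator-side address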
00:26:52.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:26:52.403 00:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:52.403 00:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:26:52.403 00:08:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:52.403 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:52.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.403 --rc genhtml_branch_coverage=1 00:26:52.403 --rc genhtml_function_coverage=1 00:26:52.403 --rc genhtml_legend=1 00:26:52.403 --rc geninfo_all_blocks=1 00:26:52.403 --rc geninfo_unexecuted_blocks=1 00:26:52.403 00:26:52.403 ' 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:52.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.404 --rc genhtml_branch_coverage=1 00:26:52.404 --rc genhtml_function_coverage=1 00:26:52.404 --rc genhtml_legend=1 00:26:52.404 --rc geninfo_all_blocks=1 00:26:52.404 --rc geninfo_unexecuted_blocks=1 00:26:52.404 00:26:52.404 ' 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:52.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.404 --rc genhtml_branch_coverage=1 00:26:52.404 --rc genhtml_function_coverage=1 00:26:52.404 --rc genhtml_legend=1 00:26:52.404 --rc geninfo_all_blocks=1 00:26:52.404 --rc geninfo_unexecuted_blocks=1 00:26:52.404 00:26:52.404 ' 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:52.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.404 --rc genhtml_branch_coverage=1 00:26:52.404 --rc genhtml_function_coverage=1 00:26:52.404 --rc genhtml_legend=1 00:26:52.404 --rc geninfo_all_blocks=1 00:26:52.404 --rc geninfo_unexecuted_blocks=1 00:26:52.404 00:26:52.404 ' 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
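The burst of scripts/common.sh lines just above is the harness deciding whether the installed lcov predates 2.x before exporting the branch/function coverage flags; the mechanism is a plain element-wise numeric comparison of dotted version strings. A minimal illustrative reimplementation of that idea (not the script itself) is:

version_lt() {                        # succeeds when $1 sorts strictly before $2
    local IFS='.-:'
    local -a v1=($1) v2=($2)
    local i
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( 10#${v1[i]:-0} < 10#${v2[i]:-0} )) && return 0
        (( 10#${v1[i]:-0} > 10#${v2[i]:-0} )) && return 1
    done
    return 1                          # equal versions are not less-than
}
version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov: keep the --rc lcov_*_coverage flags"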
00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:52.404 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:52.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:52.405 00:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:52.405 ************************************ 00:26:52.405 START TEST nvmf_shutdown_tc1 00:26:52.405 ************************************ 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:52.405 00:08:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:58.987 00:08:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:58.987 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:58.988 00:08:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:58.988 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:58.988 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:58.988 Found net devices under 0000:86:00.0: cvl_0_0 00:26:58.988 00:08:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:58.988 Found net devices under 0000:86:00.1: cvl_0_1 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:58.988 00:08:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:58.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:26:58.988 00:26:58.988 --- 10.0.0.2 ping statistics --- 00:26:58.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.988 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:58.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:26:58.988 00:26:58.988 --- 10.0.0.1 ping statistics --- 00:26:58.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.988 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=426544 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 426544 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 426544 ']' 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
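nvmfappstart at nvmf/common.sh@508-510 above launches the target inside the test namespace and then blocks until the RPC socket answers. Stripped of the harness plumbing that is essentially the following, with paths shortened to the SPDK tree and the polling loop as a simplified stand-in for the waitforlisten helper:

# -i 0 selects shm instance 0, -e 0xFFFF enables every tracepoint group,
# -m 0x1E pins the reactors to cores 1-4 (matching the four reactor notices below)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                         # wait for /var/tmp/spdk.sock to accept RPCs
done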
00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.988 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:58.988 [2024-12-10 00:08:33.213273] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:26:58.988 [2024-12-10 00:08:33.213318] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.989 [2024-12-10 00:08:33.292858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:58.989 [2024-12-10 00:08:33.334202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.989 [2024-12-10 00:08:33.334240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.989 [2024-12-10 00:08:33.334247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.989 [2024-12-10 00:08:33.334253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.989 [2024-12-10 00:08:33.334258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.989 [2024-12-10 00:08:33.335782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.989 [2024-12-10 00:08:33.335895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.989 [2024-12-10 00:08:33.336002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.989 [2024-12-10 00:08:33.336003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:58.989 [2024-12-10 00:08:33.473951] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:58.989 00:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.989 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:58.989 Malloc1 
00:26:58.989 [2024-12-10 00:08:33.592298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.989 Malloc2 00:26:58.989 Malloc3 00:26:58.989 Malloc4 00:26:58.989 Malloc5 00:26:58.989 Malloc6 00:26:58.989 Malloc7 00:26:58.989 Malloc8 00:26:58.989 Malloc9 00:26:59.250 Malloc10 00:26:59.250 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.250 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:59.250 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:59.250 00:08:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=426810 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 426810 /var/tmp/bdevperf.sock 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 426810 ']' 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:59.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
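The ten for i / cat pairs at shutdown.sh@28-29 build a batch RPC file (rpcs.txt) that shutdown.sh@36 replays against the target in one go; the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener notice above are its visible result. The exact lines appended per iteration are not echoed in the trace, but going by the single-subsystem sequence the perf_adq test used earlier and the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE defaults of 64/512, each block plausibly looks like the following (written with printf for brevity; the serial-number format is illustrative):

for i in {1..10}; do
    printf '%s\n' \
        "bdev_malloc_create 64 512 -b Malloc$i" \
        "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i" \
        "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i" \
        "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
done > rpcs.txt
# the harness then feeds rpcs.txt to its rpc_cmd wrapper in a single call (shutdown.sh@36)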
00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:59.250 { 00:26:59.250 "params": { 00:26:59.250 "name": "Nvme$subsystem", 00:26:59.250 "trtype": "$TEST_TRANSPORT", 00:26:59.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.250 "adrfam": "ipv4", 00:26:59.250 "trsvcid": "$NVMF_PORT", 00:26:59.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.250 "hdgst": ${hdgst:-false}, 00:26:59.250 "ddgst": ${ddgst:-false} 00:26:59.250 }, 00:26:59.250 "method": "bdev_nvme_attach_controller" 00:26:59.250 } 00:26:59.250 EOF 00:26:59.250 )") 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:59.250 { 00:26:59.250 "params": { 00:26:59.250 "name": "Nvme$subsystem", 00:26:59.250 "trtype": "$TEST_TRANSPORT", 00:26:59.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.250 "adrfam": "ipv4", 00:26:59.250 "trsvcid": "$NVMF_PORT", 00:26:59.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.250 "hdgst": ${hdgst:-false}, 00:26:59.250 "ddgst": ${ddgst:-false} 00:26:59.250 }, 00:26:59.250 "method": "bdev_nvme_attach_controller" 00:26:59.250 } 00:26:59.250 EOF 00:26:59.250 )") 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:59.250 { 00:26:59.250 "params": { 00:26:59.250 "name": "Nvme$subsystem", 00:26:59.250 "trtype": "$TEST_TRANSPORT", 00:26:59.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.250 "adrfam": "ipv4", 00:26:59.250 "trsvcid": "$NVMF_PORT", 00:26:59.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.250 "hdgst": ${hdgst:-false}, 00:26:59.250 "ddgst": ${ddgst:-false} 00:26:59.250 }, 00:26:59.250 "method": "bdev_nvme_attach_controller" 00:26:59.250 } 00:26:59.250 EOF 00:26:59.250 )") 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:26:59.250 { 00:26:59.250 "params": { 00:26:59.250 "name": "Nvme$subsystem", 00:26:59.250 "trtype": "$TEST_TRANSPORT", 00:26:59.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.250 "adrfam": "ipv4", 00:26:59.250 "trsvcid": "$NVMF_PORT", 00:26:59.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.250 "hdgst": ${hdgst:-false}, 00:26:59.250 "ddgst": ${ddgst:-false} 00:26:59.250 }, 00:26:59.250 "method": "bdev_nvme_attach_controller" 00:26:59.250 } 00:26:59.250 EOF 00:26:59.250 )") 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:59.250 { 00:26:59.250 "params": { 00:26:59.250 "name": "Nvme$subsystem", 00:26:59.250 "trtype": "$TEST_TRANSPORT", 00:26:59.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.250 "adrfam": "ipv4", 00:26:59.250 "trsvcid": "$NVMF_PORT", 00:26:59.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.250 "hdgst": ${hdgst:-false}, 00:26:59.250 "ddgst": ${ddgst:-false} 00:26:59.250 }, 00:26:59.250 "method": "bdev_nvme_attach_controller" 00:26:59.250 } 00:26:59.250 EOF 00:26:59.250 )") 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:59.250 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:59.250 { 00:26:59.250 "params": { 00:26:59.250 "name": "Nvme$subsystem", 00:26:59.251 "trtype": "$TEST_TRANSPORT", 00:26:59.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "$NVMF_PORT", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.251 "hdgst": ${hdgst:-false}, 00:26:59.251 "ddgst": ${ddgst:-false} 00:26:59.251 }, 00:26:59.251 "method": "bdev_nvme_attach_controller" 00:26:59.251 } 00:26:59.251 EOF 00:26:59.251 )") 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:59.251 { 00:26:59.251 "params": { 00:26:59.251 "name": "Nvme$subsystem", 00:26:59.251 "trtype": "$TEST_TRANSPORT", 00:26:59.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "$NVMF_PORT", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.251 "hdgst": ${hdgst:-false}, 00:26:59.251 "ddgst": ${ddgst:-false} 00:26:59.251 }, 00:26:59.251 "method": "bdev_nvme_attach_controller" 00:26:59.251 } 00:26:59.251 EOF 00:26:59.251 )") 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:59.251 [2024-12-10 00:08:34.066270] Starting SPDK 
v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:26:59.251 [2024-12-10 00:08:34.066319] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:59.251 { 00:26:59.251 "params": { 00:26:59.251 "name": "Nvme$subsystem", 00:26:59.251 "trtype": "$TEST_TRANSPORT", 00:26:59.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "$NVMF_PORT", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.251 "hdgst": ${hdgst:-false}, 00:26:59.251 "ddgst": ${ddgst:-false} 00:26:59.251 }, 00:26:59.251 "method": "bdev_nvme_attach_controller" 00:26:59.251 } 00:26:59.251 EOF 00:26:59.251 )") 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:59.251 { 00:26:59.251 "params": { 00:26:59.251 "name": "Nvme$subsystem", 00:26:59.251 "trtype": "$TEST_TRANSPORT", 00:26:59.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "$NVMF_PORT", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.251 "hdgst": ${hdgst:-false}, 00:26:59.251 "ddgst": ${ddgst:-false} 00:26:59.251 }, 00:26:59.251 "method": "bdev_nvme_attach_controller" 00:26:59.251 } 00:26:59.251 EOF 00:26:59.251 )") 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:59.251 { 00:26:59.251 "params": { 00:26:59.251 "name": "Nvme$subsystem", 00:26:59.251 "trtype": "$TEST_TRANSPORT", 00:26:59.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "$NVMF_PORT", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.251 "hdgst": ${hdgst:-false}, 00:26:59.251 "ddgst": ${ddgst:-false} 00:26:59.251 }, 00:26:59.251 "method": "bdev_nvme_attach_controller" 00:26:59.251 } 00:26:59.251 EOF 00:26:59.251 )") 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
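gen_nvmf_target_json, whose trace runs above and below this point, assembles one heredoc JSON fragment per requested subsystem into a bash array, checks the result with jq, joins the fragments with IFS=, and prints the resolved bdev_nvme_attach_controller list shown next; the output reaches bdev_svc through process substitution (--json /dev/fd/63). A minimal, self-contained sketch of that pattern follows; the gen_json name and the outer wrapper are assumptions for illustration, not copied from nvmf/common.sh.

#!/usr/bin/env bash
# Sketch of the heredoc-array pattern; not the real nvmf/common.sh helper.
gen_json() {
    local -a config=()
    local i
    for i in "$@"; do
        # One bdev_nvme_attach_controller fragment per subsystem index.
        config+=("$(cat <<EOF
{ "params": { "name": "Nvme$i", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode$i",
              "hostnqn": "nqn.2016-06.io.spdk:host$i",
              "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
        )")
    done
    # Comma-join the fragments (IFS=,) and wrap them in a bdev config section (wrapper assumed).
    local IFS=,
    printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}" | jq .
}
# Usage mirroring the trace: bdev_svc ... --json <(gen_json 1 2 3 4 5 6 7 8 9 10)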
00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:26:59.251 00:08:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:59.251 "params": { 00:26:59.251 "name": "Nvme1", 00:26:59.251 "trtype": "tcp", 00:26:59.251 "traddr": "10.0.0.2", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "4420", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.251 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:59.251 "hdgst": false, 00:26:59.251 "ddgst": false 00:26:59.251 }, 00:26:59.251 "method": "bdev_nvme_attach_controller" 00:26:59.251 },{ 00:26:59.251 "params": { 00:26:59.251 "name": "Nvme2", 00:26:59.251 "trtype": "tcp", 00:26:59.251 "traddr": "10.0.0.2", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "4420", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:59.251 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:59.251 "hdgst": false, 00:26:59.251 "ddgst": false 00:26:59.251 }, 00:26:59.251 "method": "bdev_nvme_attach_controller" 00:26:59.251 },{ 00:26:59.251 "params": { 00:26:59.251 "name": "Nvme3", 00:26:59.251 "trtype": "tcp", 00:26:59.251 "traddr": "10.0.0.2", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "4420", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:59.251 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:59.251 "hdgst": false, 00:26:59.251 "ddgst": false 00:26:59.251 }, 00:26:59.251 "method": "bdev_nvme_attach_controller" 00:26:59.251 },{ 00:26:59.251 "params": { 00:26:59.251 "name": "Nvme4", 00:26:59.251 "trtype": "tcp", 00:26:59.251 "traddr": "10.0.0.2", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "4420", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:59.251 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:59.251 "hdgst": false, 00:26:59.251 "ddgst": false 00:26:59.251 }, 00:26:59.251 "method": "bdev_nvme_attach_controller" 00:26:59.251 },{ 00:26:59.251 "params": { 00:26:59.251 "name": "Nvme5", 00:26:59.251 "trtype": "tcp", 00:26:59.251 "traddr": "10.0.0.2", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "4420", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:59.251 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:59.251 "hdgst": false, 00:26:59.251 "ddgst": false 00:26:59.251 }, 00:26:59.251 "method": "bdev_nvme_attach_controller" 00:26:59.251 },{ 00:26:59.251 "params": { 00:26:59.251 "name": "Nvme6", 00:26:59.251 "trtype": "tcp", 00:26:59.251 "traddr": "10.0.0.2", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "4420", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:59.251 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:59.251 "hdgst": false, 00:26:59.251 "ddgst": false 00:26:59.251 }, 00:26:59.251 "method": "bdev_nvme_attach_controller" 00:26:59.251 },{ 00:26:59.251 "params": { 00:26:59.251 "name": "Nvme7", 00:26:59.251 "trtype": "tcp", 00:26:59.251 "traddr": "10.0.0.2", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "4420", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:59.251 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:59.251 "hdgst": false, 00:26:59.251 "ddgst": false 00:26:59.251 }, 00:26:59.251 "method": "bdev_nvme_attach_controller" 00:26:59.251 },{ 00:26:59.251 "params": { 00:26:59.251 "name": "Nvme8", 00:26:59.251 "trtype": "tcp", 00:26:59.251 "traddr": "10.0.0.2", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "4420", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:59.251 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:59.251 "hdgst": false, 00:26:59.251 "ddgst": false 00:26:59.251 }, 00:26:59.251 "method": "bdev_nvme_attach_controller" 00:26:59.251 },{ 00:26:59.251 "params": { 00:26:59.251 "name": "Nvme9", 00:26:59.251 "trtype": "tcp", 00:26:59.251 "traddr": "10.0.0.2", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "4420", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:59.251 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:59.251 "hdgst": false, 00:26:59.251 "ddgst": false 00:26:59.251 }, 00:26:59.251 "method": "bdev_nvme_attach_controller" 00:26:59.251 },{ 00:26:59.251 "params": { 00:26:59.251 "name": "Nvme10", 00:26:59.251 "trtype": "tcp", 00:26:59.251 "traddr": "10.0.0.2", 00:26:59.251 "adrfam": "ipv4", 00:26:59.251 "trsvcid": "4420", 00:26:59.251 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:59.252 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:59.252 "hdgst": false, 00:26:59.252 "ddgst": false 00:26:59.252 }, 00:26:59.252 "method": "bdev_nvme_attach_controller" 00:26:59.252 }' 00:26:59.252 [2024-12-10 00:08:34.143141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.252 [2024-12-10 00:08:34.183206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.156 00:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:01.156 00:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:27:01.156 00:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:01.156 00:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.156 00:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:01.156 00:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.156 00:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 426810 00:27:01.156 00:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:27:01.156 00:08:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:27:02.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/shutdown.sh: line 74: 426810 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 426544 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:02.094 { 00:27:02.094 "params": { 00:27:02.094 "name": "Nvme$subsystem", 00:27:02.094 "trtype": "$TEST_TRANSPORT", 00:27:02.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.094 "adrfam": "ipv4", 00:27:02.094 "trsvcid": "$NVMF_PORT", 00:27:02.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.094 "hdgst": ${hdgst:-false}, 00:27:02.094 "ddgst": ${ddgst:-false} 00:27:02.094 }, 00:27:02.094 "method": "bdev_nvme_attach_controller" 00:27:02.094 } 00:27:02.094 EOF 00:27:02.094 )") 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:02.094 { 00:27:02.094 "params": { 00:27:02.094 "name": "Nvme$subsystem", 00:27:02.094 "trtype": "$TEST_TRANSPORT", 00:27:02.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.094 "adrfam": "ipv4", 00:27:02.094 "trsvcid": "$NVMF_PORT", 00:27:02.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.094 "hdgst": ${hdgst:-false}, 00:27:02.094 "ddgst": ${ddgst:-false} 00:27:02.094 }, 00:27:02.094 "method": "bdev_nvme_attach_controller" 00:27:02.094 } 00:27:02.094 EOF 00:27:02.094 )") 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:02.094 { 00:27:02.094 "params": { 00:27:02.094 "name": "Nvme$subsystem", 00:27:02.094 "trtype": "$TEST_TRANSPORT", 00:27:02.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.094 "adrfam": "ipv4", 00:27:02.094 "trsvcid": "$NVMF_PORT", 00:27:02.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.094 "hdgst": ${hdgst:-false}, 00:27:02.094 "ddgst": ${ddgst:-false} 00:27:02.094 }, 00:27:02.094 "method": "bdev_nvme_attach_controller" 00:27:02.094 } 00:27:02.094 EOF 00:27:02.094 )") 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:02.094 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:02.094 { 00:27:02.094 "params": { 00:27:02.095 "name": "Nvme$subsystem", 00:27:02.095 "trtype": "$TEST_TRANSPORT", 00:27:02.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.095 "adrfam": "ipv4", 00:27:02.095 "trsvcid": "$NVMF_PORT", 00:27:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.095 "hdgst": ${hdgst:-false}, 00:27:02.095 "ddgst": ${ddgst:-false} 00:27:02.095 }, 00:27:02.095 "method": "bdev_nvme_attach_controller" 00:27:02.095 } 00:27:02.095 EOF 00:27:02.095 )") 00:27:02.095 00:08:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:02.095 { 00:27:02.095 "params": { 00:27:02.095 "name": "Nvme$subsystem", 00:27:02.095 "trtype": "$TEST_TRANSPORT", 00:27:02.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.095 "adrfam": "ipv4", 00:27:02.095 "trsvcid": "$NVMF_PORT", 00:27:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.095 "hdgst": ${hdgst:-false}, 00:27:02.095 "ddgst": ${ddgst:-false} 00:27:02.095 }, 00:27:02.095 "method": "bdev_nvme_attach_controller" 00:27:02.095 } 00:27:02.095 EOF 00:27:02.095 )") 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:02.095 { 00:27:02.095 "params": { 00:27:02.095 "name": "Nvme$subsystem", 00:27:02.095 "trtype": "$TEST_TRANSPORT", 00:27:02.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.095 "adrfam": "ipv4", 00:27:02.095 "trsvcid": "$NVMF_PORT", 00:27:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.095 "hdgst": ${hdgst:-false}, 00:27:02.095 "ddgst": ${ddgst:-false} 00:27:02.095 }, 00:27:02.095 "method": "bdev_nvme_attach_controller" 00:27:02.095 } 00:27:02.095 EOF 00:27:02.095 )") 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:02.095 { 00:27:02.095 "params": { 00:27:02.095 "name": "Nvme$subsystem", 00:27:02.095 "trtype": "$TEST_TRANSPORT", 00:27:02.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.095 "adrfam": "ipv4", 00:27:02.095 "trsvcid": "$NVMF_PORT", 00:27:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.095 "hdgst": ${hdgst:-false}, 00:27:02.095 "ddgst": ${ddgst:-false} 00:27:02.095 }, 00:27:02.095 "method": "bdev_nvme_attach_controller" 00:27:02.095 } 00:27:02.095 EOF 00:27:02.095 )") 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:02.095 [2024-12-10 00:08:36.981426] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:27:02.095 [2024-12-10 00:08:36.981478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427307 ] 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:02.095 { 00:27:02.095 "params": { 00:27:02.095 "name": "Nvme$subsystem", 00:27:02.095 "trtype": "$TEST_TRANSPORT", 00:27:02.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.095 "adrfam": "ipv4", 00:27:02.095 "trsvcid": "$NVMF_PORT", 00:27:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.095 "hdgst": ${hdgst:-false}, 00:27:02.095 "ddgst": ${ddgst:-false} 00:27:02.095 }, 00:27:02.095 "method": "bdev_nvme_attach_controller" 00:27:02.095 } 00:27:02.095 EOF 00:27:02.095 )") 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:02.095 { 00:27:02.095 "params": { 00:27:02.095 "name": "Nvme$subsystem", 00:27:02.095 "trtype": "$TEST_TRANSPORT", 00:27:02.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.095 "adrfam": "ipv4", 00:27:02.095 "trsvcid": "$NVMF_PORT", 00:27:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.095 "hdgst": ${hdgst:-false}, 00:27:02.095 "ddgst": ${ddgst:-false} 00:27:02.095 }, 00:27:02.095 "method": "bdev_nvme_attach_controller" 00:27:02.095 } 00:27:02.095 EOF 00:27:02.095 )") 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:02.095 { 00:27:02.095 "params": { 00:27:02.095 "name": "Nvme$subsystem", 00:27:02.095 "trtype": "$TEST_TRANSPORT", 00:27:02.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.095 "adrfam": "ipv4", 00:27:02.095 "trsvcid": "$NVMF_PORT", 00:27:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.095 "hdgst": ${hdgst:-false}, 00:27:02.095 "ddgst": ${ddgst:-false} 00:27:02.095 }, 00:27:02.095 "method": "bdev_nvme_attach_controller" 00:27:02.095 } 00:27:02.095 EOF 00:27:02.095 )") 00:27:02.095 00:08:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:02.095 00:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
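This second gen_nvmf_target_json pass feeds bdevperf rather than bdev_svc: the bdev_svc instance attached to the same ten subsystems was deliberately kill -9'ed a few lines earlier (pid 426810) to exercise the shutdown path, and the target (pid 426544) is expected to keep serving I/O. Pieced together from the launch line above, the verify run is roughly equivalent to the standalone command below; the flag glosses are the usual bdevperf meanings, not something stated in this log.

# Reconstructed from the trace; gen_nvmf_target_json as sketched earlier.
#   -q 64      keep 64 I/Os outstanding per job
#   -o 65536   use a 64 KiB I/O size
#   -w verify  write data, then read it back and compare
#   -t 1       run for 1 second
./build/examples/bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1

The resolved JSON and the per-controller results table follow below.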
00:27:02.095 00:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:27:02.095 00:08:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:02.095 "params": { 00:27:02.095 "name": "Nvme1", 00:27:02.095 "trtype": "tcp", 00:27:02.095 "traddr": "10.0.0.2", 00:27:02.095 "adrfam": "ipv4", 00:27:02.095 "trsvcid": "4420", 00:27:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:02.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:02.095 "hdgst": false, 00:27:02.095 "ddgst": false 00:27:02.095 }, 00:27:02.095 "method": "bdev_nvme_attach_controller" 00:27:02.095 },{ 00:27:02.095 "params": { 00:27:02.095 "name": "Nvme2", 00:27:02.095 "trtype": "tcp", 00:27:02.095 "traddr": "10.0.0.2", 00:27:02.095 "adrfam": "ipv4", 00:27:02.095 "trsvcid": "4420", 00:27:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:02.095 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:02.095 "hdgst": false, 00:27:02.095 "ddgst": false 00:27:02.095 }, 00:27:02.095 "method": "bdev_nvme_attach_controller" 00:27:02.095 },{ 00:27:02.095 "params": { 00:27:02.095 "name": "Nvme3", 00:27:02.095 "trtype": "tcp", 00:27:02.095 "traddr": "10.0.0.2", 00:27:02.095 "adrfam": "ipv4", 00:27:02.095 "trsvcid": "4420", 00:27:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:02.095 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:02.095 "hdgst": false, 00:27:02.095 "ddgst": false 00:27:02.095 }, 00:27:02.095 "method": "bdev_nvme_attach_controller" 00:27:02.095 },{ 00:27:02.095 "params": { 00:27:02.095 "name": "Nvme4", 00:27:02.095 "trtype": "tcp", 00:27:02.095 "traddr": "10.0.0.2", 00:27:02.095 "adrfam": "ipv4", 00:27:02.095 "trsvcid": "4420", 00:27:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:02.095 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:02.095 "hdgst": false, 00:27:02.095 "ddgst": false 00:27:02.095 }, 00:27:02.095 "method": "bdev_nvme_attach_controller" 00:27:02.095 },{ 00:27:02.095 "params": { 00:27:02.095 "name": "Nvme5", 00:27:02.095 "trtype": "tcp", 00:27:02.095 "traddr": "10.0.0.2", 00:27:02.095 "adrfam": "ipv4", 00:27:02.095 "trsvcid": "4420", 00:27:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:02.095 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:02.095 "hdgst": false, 00:27:02.095 "ddgst": false 00:27:02.095 }, 00:27:02.095 "method": "bdev_nvme_attach_controller" 00:27:02.095 },{ 00:27:02.095 "params": { 00:27:02.095 "name": "Nvme6", 00:27:02.095 "trtype": "tcp", 00:27:02.095 "traddr": "10.0.0.2", 00:27:02.095 "adrfam": "ipv4", 00:27:02.095 "trsvcid": "4420", 00:27:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:02.095 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:02.095 "hdgst": false, 00:27:02.095 "ddgst": false 00:27:02.095 }, 00:27:02.095 "method": "bdev_nvme_attach_controller" 00:27:02.095 },{ 00:27:02.095 "params": { 00:27:02.095 "name": "Nvme7", 00:27:02.095 "trtype": "tcp", 00:27:02.095 "traddr": "10.0.0.2", 00:27:02.095 "adrfam": "ipv4", 00:27:02.095 "trsvcid": "4420", 00:27:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:02.095 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:02.095 "hdgst": false, 00:27:02.095 "ddgst": false 00:27:02.095 }, 00:27:02.095 "method": "bdev_nvme_attach_controller" 00:27:02.095 },{ 00:27:02.095 "params": { 00:27:02.095 "name": "Nvme8", 00:27:02.095 "trtype": "tcp", 00:27:02.095 "traddr": "10.0.0.2", 00:27:02.096 "adrfam": "ipv4", 00:27:02.096 "trsvcid": "4420", 00:27:02.096 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:02.096 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:27:02.096 "hdgst": false, 00:27:02.096 "ddgst": false 00:27:02.096 }, 00:27:02.096 "method": "bdev_nvme_attach_controller" 00:27:02.096 },{ 00:27:02.096 "params": { 00:27:02.096 "name": "Nvme9", 00:27:02.096 "trtype": "tcp", 00:27:02.096 "traddr": "10.0.0.2", 00:27:02.096 "adrfam": "ipv4", 00:27:02.096 "trsvcid": "4420", 00:27:02.096 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:02.096 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:02.096 "hdgst": false, 00:27:02.096 "ddgst": false 00:27:02.096 }, 00:27:02.096 "method": "bdev_nvme_attach_controller" 00:27:02.096 },{ 00:27:02.096 "params": { 00:27:02.096 "name": "Nvme10", 00:27:02.096 "trtype": "tcp", 00:27:02.096 "traddr": "10.0.0.2", 00:27:02.096 "adrfam": "ipv4", 00:27:02.096 "trsvcid": "4420", 00:27:02.096 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:02.096 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:02.096 "hdgst": false, 00:27:02.096 "ddgst": false 00:27:02.096 }, 00:27:02.096 "method": "bdev_nvme_attach_controller" 00:27:02.096 }' 00:27:02.354 [2024-12-10 00:08:37.060462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.354 [2024-12-10 00:08:37.100793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.741 Running I/O for 1 seconds... 00:27:04.940 2189.00 IOPS, 136.81 MiB/s 00:27:04.940 Latency(us) 00:27:04.940 [2024-12-09T23:08:39.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.940 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:04.940 Verification LBA range: start 0x0 length 0x400 00:27:04.940 Nvme1n1 : 1.16 275.33 17.21 0.00 0.00 229732.71 18919.96 218833.25 00:27:04.940 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:04.940 Verification LBA range: start 0x0 length 0x400 00:27:04.940 Nvme2n1 : 1.16 276.07 17.25 0.00 0.00 226575.23 18578.03 214274.23 00:27:04.940 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:04.940 Verification LBA range: start 0x0 length 0x400 00:27:04.940 Nvme3n1 : 1.12 293.42 18.34 0.00 0.00 206284.50 14702.86 218833.25 00:27:04.940 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:04.940 Verification LBA range: start 0x0 length 0x400 00:27:04.940 Nvme4n1 : 1.15 277.71 17.36 0.00 0.00 218751.42 13848.04 222480.47 00:27:04.940 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:04.940 Verification LBA range: start 0x0 length 0x400 00:27:04.940 Nvme5n1 : 1.07 238.29 14.89 0.00 0.00 250135.37 17324.30 228863.11 00:27:04.941 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:04.941 Verification LBA range: start 0x0 length 0x400 00:27:04.941 Nvme6n1 : 1.17 272.75 17.05 0.00 0.00 216674.66 18464.06 218833.25 00:27:04.941 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:04.941 Verification LBA range: start 0x0 length 0x400 00:27:04.941 Nvme7n1 : 1.17 277.34 17.33 0.00 0.00 209490.06 5613.30 217009.64 00:27:04.941 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:04.941 Verification LBA range: start 0x0 length 0x400 00:27:04.941 Nvme8n1 : 1.17 274.12 17.13 0.00 0.00 209211.04 14930.81 246187.41 00:27:04.941 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:04.941 Verification LBA range: start 0x0 length 0x400 00:27:04.941 Nvme9n1 : 1.18 271.26 16.95 0.00 0.00 208513.47 15158.76 237069.36 00:27:04.941 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:27:04.941 Verification LBA range: start 0x0 length 0x400 00:27:04.941 Nvme10n1 : 1.18 271.92 16.99 0.00 0.00 204755.48 17096.35 238892.97 00:27:04.941 [2024-12-09T23:08:39.877Z] =================================================================================================================== 00:27:04.941 [2024-12-09T23:08:39.877Z] Total : 2728.21 170.51 0.00 0.00 217315.18 5613.30 246187.41 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:05.201 rmmod nvme_tcp 00:27:05.201 rmmod nvme_fabrics 00:27:05.201 rmmod nvme_keyring 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 426544 ']' 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 426544 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 426544 ']' 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 426544 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:05.201 00:08:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 426544 00:27:05.201 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:05.201 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:05.201 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 426544' 00:27:05.201 killing process with pid 426544 00:27:05.201 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 426544 00:27:05.201 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 426544 00:27:05.770 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:05.770 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:05.770 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:05.770 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:27:05.770 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:27:05.770 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:05.770 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:27:05.770 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:05.770 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:05.770 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.770 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.770 00:08:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:07.678 00:27:07.678 real 0m15.375s 00:27:07.678 user 0m34.098s 00:27:07.678 sys 0m5.863s 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:07.678 ************************************ 00:27:07.678 END TEST nvmf_shutdown_tc1 00:27:07.678 ************************************ 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:07.678 ************************************ 00:27:07.678 START TEST nvmf_shutdown_tc2 00:27:07.678 ************************************ 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:27:07.678 00:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:07.678 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:07.679 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:07.679 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:07.679 Found net devices under 0000:86:00.0: cvl_0_0 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:07.679 00:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:07.679 Found net devices under 0000:86:00.1: cvl_0_1 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.679 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:07.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:27:07.939 00:27:07.939 --- 10.0.0.2 ping statistics --- 00:27:07.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.939 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:07.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:27:07.939 00:27:07.939 --- 10.0.0.1 ping statistics --- 00:27:07.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.939 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:07.939 00:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=428334 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 428334 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 428334 ']' 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:07.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:07.939 00:08:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:08.198 [2024-12-10 00:08:42.916971] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:27:08.198 [2024-12-10 00:08:42.917015] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.198 [2024-12-10 00:08:42.996269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:08.198 [2024-12-10 00:08:43.038195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.198 [2024-12-10 00:08:43.038233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.198 [2024-12-10 00:08:43.038240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.198 [2024-12-10 00:08:43.038246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:08.198 [2024-12-10 00:08:43.038251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
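The trace above shows how the harness brings up the NVMe-oF target for this test case: nvmfappstart launches build/bin/nvmf_tgt inside the cvl_0_0_ns_spdk namespace (the repeated `ip netns exec` prefix comes from NVMF_TARGET_NS_CMD being prepended to NVMF_APP) with core mask 0x1E and tracepoint group 0xFFFF, then waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, with the polling loop simplified and the rpc.py path assumed from the usual SPDK tree layout rather than copied from this log:

# Sketch only: start nvmf_tgt in the target namespace and wait for its RPC socket.
NS=cvl_0_0_ns_spdk
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk   # workspace path seen in this log
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Simplified stand-in for waitforlisten: poll until the RPC server responds.
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done

The real helper also checks that the PID is still alive between polls and enforces a retry limit; that bookkeeping is omitted here.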
00:27:08.198 [2024-12-10 00:08:43.039720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:08.198 [2024-12-10 00:08:43.039829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:08.198 [2024-12-10 00:08:43.039933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.198 [2024-12-10 00:08:43.039934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.137 [2024-12-10 00:08:43.803987] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.137 00:08:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.137 Malloc1 00:27:09.137 [2024-12-10 00:08:43.917325] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.137 Malloc2 00:27:09.137 Malloc3 00:27:09.137 Malloc4 00:27:09.137 Malloc5 00:27:09.397 Malloc6 00:27:09.397 Malloc7 00:27:09.397 Malloc8 00:27:09.397 Malloc9 00:27:09.397 Malloc10 00:27:09.397 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.397 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:09.397 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:09.397 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=428612 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 428612 /var/tmp/bdevperf.sock 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 428612 ']' 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:09.661 00:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:09.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.661 { 00:27:09.661 "params": { 00:27:09.661 "name": "Nvme$subsystem", 00:27:09.661 "trtype": "$TEST_TRANSPORT", 00:27:09.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.661 "adrfam": "ipv4", 00:27:09.661 "trsvcid": "$NVMF_PORT", 00:27:09.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.661 "hdgst": ${hdgst:-false}, 00:27:09.661 "ddgst": ${ddgst:-false} 00:27:09.661 }, 00:27:09.661 "method": "bdev_nvme_attach_controller" 00:27:09.661 } 00:27:09.661 EOF 00:27:09.661 )") 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.661 { 00:27:09.661 "params": { 00:27:09.661 "name": "Nvme$subsystem", 00:27:09.661 "trtype": "$TEST_TRANSPORT", 00:27:09.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.661 "adrfam": "ipv4", 00:27:09.661 "trsvcid": "$NVMF_PORT", 00:27:09.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.661 "hdgst": ${hdgst:-false}, 00:27:09.661 "ddgst": ${ddgst:-false} 00:27:09.661 }, 00:27:09.661 "method": "bdev_nvme_attach_controller" 00:27:09.661 } 00:27:09.661 EOF 00:27:09.661 )") 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.661 { 00:27:09.661 "params": { 00:27:09.661 
"name": "Nvme$subsystem", 00:27:09.661 "trtype": "$TEST_TRANSPORT", 00:27:09.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.661 "adrfam": "ipv4", 00:27:09.661 "trsvcid": "$NVMF_PORT", 00:27:09.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.661 "hdgst": ${hdgst:-false}, 00:27:09.661 "ddgst": ${ddgst:-false} 00:27:09.661 }, 00:27:09.661 "method": "bdev_nvme_attach_controller" 00:27:09.661 } 00:27:09.661 EOF 00:27:09.661 )") 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.661 { 00:27:09.661 "params": { 00:27:09.661 "name": "Nvme$subsystem", 00:27:09.661 "trtype": "$TEST_TRANSPORT", 00:27:09.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.661 "adrfam": "ipv4", 00:27:09.661 "trsvcid": "$NVMF_PORT", 00:27:09.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.661 "hdgst": ${hdgst:-false}, 00:27:09.661 "ddgst": ${ddgst:-false} 00:27:09.661 }, 00:27:09.661 "method": "bdev_nvme_attach_controller" 00:27:09.661 } 00:27:09.661 EOF 00:27:09.661 )") 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.661 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.661 { 00:27:09.661 "params": { 00:27:09.661 "name": "Nvme$subsystem", 00:27:09.661 "trtype": "$TEST_TRANSPORT", 00:27:09.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.662 "adrfam": "ipv4", 00:27:09.662 "trsvcid": "$NVMF_PORT", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.662 "hdgst": ${hdgst:-false}, 00:27:09.662 "ddgst": ${ddgst:-false} 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 } 00:27:09.662 EOF 00:27:09.662 )") 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.662 { 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme$subsystem", 00:27:09.662 "trtype": "$TEST_TRANSPORT", 00:27:09.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.662 "adrfam": "ipv4", 00:27:09.662 "trsvcid": "$NVMF_PORT", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.662 "hdgst": ${hdgst:-false}, 00:27:09.662 "ddgst": ${ddgst:-false} 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 } 00:27:09.662 EOF 00:27:09.662 )") 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.662 { 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme$subsystem", 00:27:09.662 "trtype": "$TEST_TRANSPORT", 00:27:09.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.662 "adrfam": "ipv4", 00:27:09.662 "trsvcid": "$NVMF_PORT", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.662 "hdgst": ${hdgst:-false}, 00:27:09.662 "ddgst": ${ddgst:-false} 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 } 00:27:09.662 EOF 00:27:09.662 )") 00:27:09.662 [2024-12-10 00:08:44.389217] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:27:09.662 [2024-12-10 00:08:44.389265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428612 ] 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.662 { 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme$subsystem", 00:27:09.662 "trtype": "$TEST_TRANSPORT", 00:27:09.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.662 "adrfam": "ipv4", 00:27:09.662 "trsvcid": "$NVMF_PORT", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.662 "hdgst": ${hdgst:-false}, 00:27:09.662 "ddgst": ${ddgst:-false} 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 } 00:27:09.662 EOF 00:27:09.662 )") 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.662 { 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme$subsystem", 00:27:09.662 "trtype": "$TEST_TRANSPORT", 00:27:09.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.662 "adrfam": "ipv4", 00:27:09.662 "trsvcid": "$NVMF_PORT", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.662 "hdgst": ${hdgst:-false}, 00:27:09.662 "ddgst": ${ddgst:-false} 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 } 00:27:09.662 EOF 00:27:09.662 )") 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:09.662 { 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme$subsystem", 00:27:09.662 "trtype": "$TEST_TRANSPORT", 00:27:09.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.662 "adrfam": 
"ipv4", 00:27:09.662 "trsvcid": "$NVMF_PORT", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.662 "hdgst": ${hdgst:-false}, 00:27:09.662 "ddgst": ${ddgst:-false} 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 } 00:27:09.662 EOF 00:27:09.662 )") 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:27:09.662 00:08:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme1", 00:27:09.662 "trtype": "tcp", 00:27:09.662 "traddr": "10.0.0.2", 00:27:09.662 "adrfam": "ipv4", 00:27:09.662 "trsvcid": "4420", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:09.662 "hdgst": false, 00:27:09.662 "ddgst": false 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 },{ 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme2", 00:27:09.662 "trtype": "tcp", 00:27:09.662 "traddr": "10.0.0.2", 00:27:09.662 "adrfam": "ipv4", 00:27:09.662 "trsvcid": "4420", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:09.662 "hdgst": false, 00:27:09.662 "ddgst": false 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 },{ 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme3", 00:27:09.662 "trtype": "tcp", 00:27:09.662 "traddr": "10.0.0.2", 00:27:09.662 "adrfam": "ipv4", 00:27:09.662 "trsvcid": "4420", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:09.662 "hdgst": false, 00:27:09.662 "ddgst": false 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 },{ 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme4", 00:27:09.662 "trtype": "tcp", 00:27:09.662 "traddr": "10.0.0.2", 00:27:09.662 "adrfam": "ipv4", 00:27:09.662 "trsvcid": "4420", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:09.662 "hdgst": false, 00:27:09.662 "ddgst": false 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 },{ 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme5", 00:27:09.662 "trtype": "tcp", 00:27:09.662 "traddr": "10.0.0.2", 00:27:09.662 "adrfam": "ipv4", 00:27:09.662 "trsvcid": "4420", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:09.662 "hdgst": false, 00:27:09.662 "ddgst": false 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 },{ 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme6", 00:27:09.662 "trtype": "tcp", 00:27:09.662 "traddr": "10.0.0.2", 00:27:09.662 "adrfam": "ipv4", 00:27:09.662 "trsvcid": "4420", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:09.662 "hdgst": false, 00:27:09.662 "ddgst": false 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 },{ 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme7", 00:27:09.662 "trtype": "tcp", 00:27:09.662 "traddr": "10.0.0.2", 00:27:09.662 
"adrfam": "ipv4", 00:27:09.662 "trsvcid": "4420", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:09.662 "hdgst": false, 00:27:09.662 "ddgst": false 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 },{ 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme8", 00:27:09.662 "trtype": "tcp", 00:27:09.662 "traddr": "10.0.0.2", 00:27:09.662 "adrfam": "ipv4", 00:27:09.662 "trsvcid": "4420", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:09.662 "hdgst": false, 00:27:09.662 "ddgst": false 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 },{ 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme9", 00:27:09.662 "trtype": "tcp", 00:27:09.662 "traddr": "10.0.0.2", 00:27:09.662 "adrfam": "ipv4", 00:27:09.662 "trsvcid": "4420", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:09.662 "hdgst": false, 00:27:09.662 "ddgst": false 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 },{ 00:27:09.662 "params": { 00:27:09.662 "name": "Nvme10", 00:27:09.662 "trtype": "tcp", 00:27:09.662 "traddr": "10.0.0.2", 00:27:09.662 "adrfam": "ipv4", 00:27:09.662 "trsvcid": "4420", 00:27:09.662 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:09.662 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:09.662 "hdgst": false, 00:27:09.662 "ddgst": false 00:27:09.662 }, 00:27:09.662 "method": "bdev_nvme_attach_controller" 00:27:09.662 }' 00:27:09.662 [2024-12-10 00:08:44.467427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.662 [2024-12-10 00:08:44.507839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.574 Running I/O for 10 seconds... 
00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:11.574 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.836 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:27:11.836 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:27:11.836 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:12.097 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:12.097 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:12.097 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:12.097 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:12.097 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.097 00:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.097 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.097 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=84 00:27:12.097 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 84 -ge 100 ']' 00:27:12.097 00:08:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 428612 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 428612 ']' 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 428612 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 428612 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 428612' 00:27:12.365 killing process with pid 428612 00:27:12.365 00:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 428612
00:27:12.365 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 428612
00:27:12.365 Received shutdown signal, test time was about 0.913250 seconds
00:27:12.365
00:27:12.365 Latency(us)
00:27:12.365 [2024-12-09T23:08:47.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:12.365 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:12.365 Verification LBA range: start 0x0 length 0x400
00:27:12.365 Nvme1n1 : 0.90 292.55 18.28 0.00 0.00 215335.79 3761.20 217921.45
00:27:12.365 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:12.365 Verification LBA range: start 0x0 length 0x400
00:27:12.365 Nvme2n1 : 0.88 289.59 18.10 0.00 0.00 214357.93 16640.45 206979.78
00:27:12.365 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:12.365 Verification LBA range: start 0x0 length 0x400
00:27:12.365 Nvme3n1 : 0.90 283.38 17.71 0.00 0.00 214976.78 18692.01 212450.62
00:27:12.365 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:12.365 Verification LBA range: start 0x0 length 0x400
00:27:12.365 Nvme4n1 : 0.91 280.52 17.53 0.00 0.00 213764.67 14816.83 224304.08
00:27:12.365 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:12.365 Verification LBA range: start 0x0 length 0x400
00:27:12.365 Nvme5n1 : 0.90 283.64 17.73 0.00 0.00 207328.61 21655.37 218833.25
00:27:12.365 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:12.365 Verification LBA range: start 0x0 length 0x400
00:27:12.365 Nvme6n1 : 0.91 281.27 17.58 0.00 0.00 205222.96 17096.35 221568.67
00:27:12.365 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:12.365 Verification LBA range: start 0x0 length 0x400
00:27:12.365 Nvme7n1 : 0.89 292.12 18.26 0.00 0.00 192730.10 3761.20 221568.67
00:27:12.365 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:12.365 Verification LBA range: start 0x0 length 0x400
00:27:12.365 Nvme8n1 : 0.91 282.39 17.65 0.00 0.00 196380.94 17666.23 218833.25
00:27:12.365 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:12.365 Verification LBA range: start 0x0 length 0x400
00:27:12.365 Nvme9n1 : 0.88 224.25 14.02 0.00 0.00 238968.54 4160.11 229774.91
00:27:12.365 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:12.365 Verification LBA range: start 0x0 length 0x400
00:27:12.365 Nvme10n1 : 0.89 221.37 13.84 0.00 0.00 236834.43 8035.28 237069.36
00:27:12.365 [2024-12-09T23:08:47.301Z] ===================================================================================================================
00:27:12.365 [2024-12-09T23:08:47.301Z] Total : 2731.09 170.69 0.00 0.00 212372.37 3761.20 237069.36
00:27:12.623 00:08:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:27:13.561 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 428334
00:27:13.561 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:27:13.561 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:27:13.561 00:08:48
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:27:13.561 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:27:13.561 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:13.561 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:13.561 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:27:13.561 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:13.561 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:27:13.561 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:13.561 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:13.561 rmmod nvme_tcp 00:27:13.561 rmmod nvme_fabrics 00:27:13.561 rmmod nvme_keyring 00:27:13.561 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:13.821 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:27:13.821 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:27:13.821 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 428334 ']' 00:27:13.821 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 428334 00:27:13.821 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 428334 ']' 00:27:13.821 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 428334 00:27:13.821 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:27:13.821 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.821 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 428334 00:27:13.821 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:13.821 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:13.821 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 428334' 00:27:13.821 killing process with pid 428334 00:27:13.821 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 428334 00:27:13.821 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 428334 00:27:14.080 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:14.080 00:08:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:14.080 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:14.080 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:27:14.080 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:27:14.080 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:27:14.080 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:14.080 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:14.080 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:14.080 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.080 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.080 00:08:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.628 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:16.628 00:27:16.628 real 0m8.431s 00:27:16.628 user 0m26.483s 00:27:16.628 sys 0m1.449s 00:27:16.628 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.628 00:08:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:16.628 ************************************ 00:27:16.628 END TEST nvmf_shutdown_tc2 00:27:16.628 ************************************ 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:16.628 ************************************ 00:27:16.628 START TEST nvmf_shutdown_tc3 00:27:16.628 ************************************ 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:27:16.628 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:16.629 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:16.629 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:16.629 00:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:16.629 Found net devices under 0000:86:00.0: cvl_0_0 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:16.629 Found net devices under 0000:86:00.1: cvl_0_1 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.629 00:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:16.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:27:16.629 00:27:16.629 --- 10.0.0.2 ping statistics --- 00:27:16.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.629 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:16.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:27:16.629 00:27:16.629 --- 10.0.0.1 ping statistics --- 00:27:16.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.629 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.629 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=429885 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 429885 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:16.630 00:08:51 
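For reference, the namespace bring-up and connectivity check traced above reduce to roughly the following sequence. This is a condensed sketch assembled from the commands visible in the trace, not a verbatim excerpt of nvmf/common.sh; the cvl_0_* interface names and the 10.0.0.x addresses are simply the values this rig used.

# clear any stale addressing, then move the target-side port into its own network namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target interface lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port and verify reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator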
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 429885 ']' 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.630 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:16.630 [2024-12-10 00:08:51.438889] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:27:16.630 [2024-12-10 00:08:51.438937] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.630 [2024-12-10 00:08:51.516829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:16.630 [2024-12-10 00:08:51.556438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.630 [2024-12-10 00:08:51.556477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.630 [2024-12-10 00:08:51.556485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.630 [2024-12-10 00:08:51.556491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.630 [2024-12-10 00:08:51.556496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
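Since the target was started with tracepoint group mask 0xFFFF, the notices above also spell out how its trace buffer can be inspected while the test runs. Quoting the commands from the notice itself (the /tmp destination below is an arbitrary choice, not part of the log):

spdk_trace -s nvmf -i 0              # capture a snapshot of events at runtime
cp /dev/shm/nvmf_trace.0 /tmp/       # keep the shared-memory trace for offline analysis/debug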
00:27:16.630 [2024-12-10 00:08:51.558129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.630 [2024-12-10 00:08:51.558255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:16.630 [2024-12-10 00:08:51.558359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.630 [2024-12-10 00:08:51.558360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:16.890 [2024-12-10 00:08:51.703137] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.890 00:08:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:16.890 Malloc1 00:27:16.890 [2024-12-10 00:08:51.803720] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.150 Malloc2 00:27:17.150 Malloc3 00:27:17.150 Malloc4 00:27:17.150 Malloc5 00:27:17.150 Malloc6 00:27:17.150 Malloc7 00:27:17.150 Malloc8 00:27:17.411 Malloc9 00:27:17.411 Malloc10 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=429963 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 429963 /var/tmp/bdevperf.sock 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 429963 ']' 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:17.411 00:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:17.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.411 { 00:27:17.411 "params": { 00:27:17.411 "name": "Nvme$subsystem", 00:27:17.411 "trtype": "$TEST_TRANSPORT", 00:27:17.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.411 "adrfam": "ipv4", 00:27:17.411 "trsvcid": "$NVMF_PORT", 00:27:17.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.411 "hdgst": ${hdgst:-false}, 00:27:17.411 "ddgst": ${ddgst:-false} 00:27:17.411 }, 00:27:17.411 "method": "bdev_nvme_attach_controller" 00:27:17.411 } 00:27:17.411 EOF 00:27:17.411 )") 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.411 { 00:27:17.411 "params": { 00:27:17.411 "name": "Nvme$subsystem", 00:27:17.411 "trtype": "$TEST_TRANSPORT", 00:27:17.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.411 "adrfam": "ipv4", 00:27:17.411 "trsvcid": "$NVMF_PORT", 00:27:17.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.411 "hdgst": ${hdgst:-false}, 00:27:17.411 "ddgst": ${ddgst:-false} 00:27:17.411 }, 00:27:17.411 "method": "bdev_nvme_attach_controller" 00:27:17.411 } 00:27:17.411 EOF 00:27:17.411 )") 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.411 { 00:27:17.411 "params": { 00:27:17.411 
"name": "Nvme$subsystem", 00:27:17.411 "trtype": "$TEST_TRANSPORT", 00:27:17.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.411 "adrfam": "ipv4", 00:27:17.411 "trsvcid": "$NVMF_PORT", 00:27:17.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.411 "hdgst": ${hdgst:-false}, 00:27:17.411 "ddgst": ${ddgst:-false} 00:27:17.411 }, 00:27:17.411 "method": "bdev_nvme_attach_controller" 00:27:17.411 } 00:27:17.411 EOF 00:27:17.411 )") 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.411 { 00:27:17.411 "params": { 00:27:17.411 "name": "Nvme$subsystem", 00:27:17.411 "trtype": "$TEST_TRANSPORT", 00:27:17.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.411 "adrfam": "ipv4", 00:27:17.411 "trsvcid": "$NVMF_PORT", 00:27:17.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.411 "hdgst": ${hdgst:-false}, 00:27:17.411 "ddgst": ${ddgst:-false} 00:27:17.411 }, 00:27:17.411 "method": "bdev_nvme_attach_controller" 00:27:17.411 } 00:27:17.411 EOF 00:27:17.411 )") 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.411 { 00:27:17.411 "params": { 00:27:17.411 "name": "Nvme$subsystem", 00:27:17.411 "trtype": "$TEST_TRANSPORT", 00:27:17.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.411 "adrfam": "ipv4", 00:27:17.411 "trsvcid": "$NVMF_PORT", 00:27:17.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.411 "hdgst": ${hdgst:-false}, 00:27:17.411 "ddgst": ${ddgst:-false} 00:27:17.411 }, 00:27:17.411 "method": "bdev_nvme_attach_controller" 00:27:17.411 } 00:27:17.411 EOF 00:27:17.411 )") 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.411 { 00:27:17.411 "params": { 00:27:17.411 "name": "Nvme$subsystem", 00:27:17.411 "trtype": "$TEST_TRANSPORT", 00:27:17.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.411 "adrfam": "ipv4", 00:27:17.411 "trsvcid": "$NVMF_PORT", 00:27:17.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.411 "hdgst": ${hdgst:-false}, 00:27:17.411 "ddgst": ${ddgst:-false} 00:27:17.411 }, 00:27:17.411 "method": "bdev_nvme_attach_controller" 00:27:17.411 } 00:27:17.411 EOF 00:27:17.411 )") 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:27:17.411 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.411 { 00:27:17.411 "params": { 00:27:17.411 "name": "Nvme$subsystem", 00:27:17.411 "trtype": "$TEST_TRANSPORT", 00:27:17.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.411 "adrfam": "ipv4", 00:27:17.411 "trsvcid": "$NVMF_PORT", 00:27:17.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.412 "hdgst": ${hdgst:-false}, 00:27:17.412 "ddgst": ${ddgst:-false} 00:27:17.412 }, 00:27:17.412 "method": "bdev_nvme_attach_controller" 00:27:17.412 } 00:27:17.412 EOF 00:27:17.412 )") 00:27:17.412 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:17.412 [2024-12-10 00:08:52.276535] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:27:17.412 [2024-12-10 00:08:52.276586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429963 ] 00:27:17.412 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.412 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.412 { 00:27:17.412 "params": { 00:27:17.412 "name": "Nvme$subsystem", 00:27:17.412 "trtype": "$TEST_TRANSPORT", 00:27:17.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.412 "adrfam": "ipv4", 00:27:17.412 "trsvcid": "$NVMF_PORT", 00:27:17.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.412 "hdgst": ${hdgst:-false}, 00:27:17.412 "ddgst": ${ddgst:-false} 00:27:17.412 }, 00:27:17.412 "method": "bdev_nvme_attach_controller" 00:27:17.412 } 00:27:17.412 EOF 00:27:17.412 )") 00:27:17.412 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:17.412 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.412 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.412 { 00:27:17.412 "params": { 00:27:17.412 "name": "Nvme$subsystem", 00:27:17.412 "trtype": "$TEST_TRANSPORT", 00:27:17.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.412 "adrfam": "ipv4", 00:27:17.412 "trsvcid": "$NVMF_PORT", 00:27:17.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.412 "hdgst": ${hdgst:-false}, 00:27:17.412 "ddgst": ${ddgst:-false} 00:27:17.412 }, 00:27:17.412 "method": "bdev_nvme_attach_controller" 00:27:17.412 } 00:27:17.412 EOF 00:27:17.412 )") 00:27:17.412 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:17.412 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:17.412 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:17.412 { 00:27:17.412 "params": { 00:27:17.412 "name": "Nvme$subsystem", 00:27:17.412 "trtype": "$TEST_TRANSPORT", 00:27:17.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:17.412 "adrfam": 
"ipv4", 00:27:17.412 "trsvcid": "$NVMF_PORT", 00:27:17.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:17.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:17.412 "hdgst": ${hdgst:-false}, 00:27:17.412 "ddgst": ${ddgst:-false} 00:27:17.412 }, 00:27:17.412 "method": "bdev_nvme_attach_controller" 00:27:17.412 } 00:27:17.412 EOF 00:27:17.412 )") 00:27:17.412 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:17.412 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:27:17.412 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:27:17.412 00:08:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:17.412 "params": { 00:27:17.412 "name": "Nvme1", 00:27:17.412 "trtype": "tcp", 00:27:17.412 "traddr": "10.0.0.2", 00:27:17.412 "adrfam": "ipv4", 00:27:17.412 "trsvcid": "4420", 00:27:17.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:17.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:17.412 "hdgst": false, 00:27:17.412 "ddgst": false 00:27:17.412 }, 00:27:17.412 "method": "bdev_nvme_attach_controller" 00:27:17.412 },{ 00:27:17.412 "params": { 00:27:17.412 "name": "Nvme2", 00:27:17.412 "trtype": "tcp", 00:27:17.412 "traddr": "10.0.0.2", 00:27:17.412 "adrfam": "ipv4", 00:27:17.412 "trsvcid": "4420", 00:27:17.412 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:17.412 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:17.412 "hdgst": false, 00:27:17.412 "ddgst": false 00:27:17.412 }, 00:27:17.412 "method": "bdev_nvme_attach_controller" 00:27:17.412 },{ 00:27:17.412 "params": { 00:27:17.412 "name": "Nvme3", 00:27:17.412 "trtype": "tcp", 00:27:17.412 "traddr": "10.0.0.2", 00:27:17.412 "adrfam": "ipv4", 00:27:17.412 "trsvcid": "4420", 00:27:17.412 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:17.412 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:17.412 "hdgst": false, 00:27:17.412 "ddgst": false 00:27:17.412 }, 00:27:17.412 "method": "bdev_nvme_attach_controller" 00:27:17.412 },{ 00:27:17.412 "params": { 00:27:17.412 "name": "Nvme4", 00:27:17.412 "trtype": "tcp", 00:27:17.412 "traddr": "10.0.0.2", 00:27:17.412 "adrfam": "ipv4", 00:27:17.412 "trsvcid": "4420", 00:27:17.412 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:17.412 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:17.412 "hdgst": false, 00:27:17.412 "ddgst": false 00:27:17.412 }, 00:27:17.412 "method": "bdev_nvme_attach_controller" 00:27:17.412 },{ 00:27:17.412 "params": { 00:27:17.412 "name": "Nvme5", 00:27:17.412 "trtype": "tcp", 00:27:17.412 "traddr": "10.0.0.2", 00:27:17.412 "adrfam": "ipv4", 00:27:17.412 "trsvcid": "4420", 00:27:17.412 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:17.412 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:17.412 "hdgst": false, 00:27:17.412 "ddgst": false 00:27:17.412 }, 00:27:17.412 "method": "bdev_nvme_attach_controller" 00:27:17.412 },{ 00:27:17.412 "params": { 00:27:17.412 "name": "Nvme6", 00:27:17.412 "trtype": "tcp", 00:27:17.412 "traddr": "10.0.0.2", 00:27:17.412 "adrfam": "ipv4", 00:27:17.412 "trsvcid": "4420", 00:27:17.412 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:17.412 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:17.412 "hdgst": false, 00:27:17.412 "ddgst": false 00:27:17.412 }, 00:27:17.412 "method": "bdev_nvme_attach_controller" 00:27:17.412 },{ 00:27:17.412 "params": { 00:27:17.412 "name": "Nvme7", 00:27:17.412 "trtype": "tcp", 00:27:17.412 "traddr": "10.0.0.2", 00:27:17.412 
"adrfam": "ipv4", 00:27:17.412 "trsvcid": "4420", 00:27:17.412 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:17.412 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:17.412 "hdgst": false, 00:27:17.412 "ddgst": false 00:27:17.412 }, 00:27:17.412 "method": "bdev_nvme_attach_controller" 00:27:17.412 },{ 00:27:17.412 "params": { 00:27:17.412 "name": "Nvme8", 00:27:17.412 "trtype": "tcp", 00:27:17.412 "traddr": "10.0.0.2", 00:27:17.412 "adrfam": "ipv4", 00:27:17.412 "trsvcid": "4420", 00:27:17.412 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:17.412 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:17.412 "hdgst": false, 00:27:17.412 "ddgst": false 00:27:17.412 }, 00:27:17.412 "method": "bdev_nvme_attach_controller" 00:27:17.412 },{ 00:27:17.412 "params": { 00:27:17.412 "name": "Nvme9", 00:27:17.412 "trtype": "tcp", 00:27:17.412 "traddr": "10.0.0.2", 00:27:17.412 "adrfam": "ipv4", 00:27:17.412 "trsvcid": "4420", 00:27:17.412 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:17.412 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:17.412 "hdgst": false, 00:27:17.412 "ddgst": false 00:27:17.412 }, 00:27:17.412 "method": "bdev_nvme_attach_controller" 00:27:17.412 },{ 00:27:17.412 "params": { 00:27:17.412 "name": "Nvme10", 00:27:17.412 "trtype": "tcp", 00:27:17.412 "traddr": "10.0.0.2", 00:27:17.412 "adrfam": "ipv4", 00:27:17.412 "trsvcid": "4420", 00:27:17.412 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:17.412 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:17.412 "hdgst": false, 00:27:17.412 "ddgst": false 00:27:17.412 }, 00:27:17.412 "method": "bdev_nvme_attach_controller" 00:27:17.412 }' 00:27:17.672 [2024-12-10 00:08:52.354508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.672 [2024-12-10 00:08:52.395323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.587 Running I/O for 10 seconds... 
00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:27:19.587 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:19.860 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:19.860 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:19.860 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:19.860 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:19.860 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.860 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:19.860 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.860 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:27:19.860 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:27:19.860 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:27:19.861 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:27:19.861 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:27:19.861 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 429885 00:27:19.861 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 429885 ']' 00:27:19.861 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 429885 00:27:19.861 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:27:19.861 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:19.861 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 429885 00:27:19.861 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:19.861 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:19.861 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 429885' 00:27:19.861 killing process with pid 429885 00:27:19.861 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 429885 00:27:19.861 00:08:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 429885 00:27:19.861 [2024-12-10 00:08:54.704224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbfac0 is same with the state(6) to be set 00:27:19.861 [2024-12-10 00:08:54.704282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbfac0 is same with the state(6) to be set 00:27:19.861 [2024-12-10 00:08:54.704296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbfac0 is same with the state(6) to be set 00:27:19.861 [2024-12-10 00:08:54.704303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbfac0 is same with the state(6) to be set 00:27:19.861 [2024-12-10 00:08:54.704309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbfac0 is same with the state(6) to be set 00:27:19.861 [2024-12-10 00:08:54.704315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1bbfac0 is same with the state(6) to be set
[... the same tcp.c:1790:nvmf_tcp_qpair_set_recv_state *ERROR* line repeats many more times, first for tqpair=0x1bbfac0 and then for tqpair=0x1e33e30, at timestamps 00:08:54.704322 through 00:08:54.706855; duplicate lines omitted ...]
00:27:19.862 [2024-12-10 00:08:54.709389] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[... the tcp.c:1790 line then repeats in the same way for tqpair=0x1bbffb0 from 00:08:54.709579 onward; duplicate lines omitted ...]
00:27:19.863 [2024-12-10 00:08:54.709920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbffb0 is same with the
state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.709926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbffb0 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.709932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbffb0 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.709938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbffb0 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.709943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbffb0 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.709949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbffb0 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.709955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbffb0 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.709961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbffb0 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.709967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbffb0 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.709973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbffb0 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.709978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbffb0 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.709984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbffb0 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.710496] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:19.863 [2024-12-10 00:08:54.712457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same 
with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712619] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:19.863 [2024-12-10 00:08:54.712633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is 
same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712723] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712820] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.863 [2024-12-10 00:08:54.712865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.712870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.712876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.712882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0480 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.713859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.713883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.713893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.713900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.713908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.713915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.713923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.713930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.713937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19205b0 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.713990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.713999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.714007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.714014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.714021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.714028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.714035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.714041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.714049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee7e0 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.714087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.714084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with [2024-12-10 00:08:54.714096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsthe state(6) to be set 00:27:19.864 id:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.714106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.714112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.714121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.714129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.714136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 
[2024-12-10 00:08:54.714144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b6e10 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.714205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.714213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-12-10 00:08:54.714220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with id:0 cdw10:00000000 cdw11:00000000 00:27:19.864 the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with [2024-12-10 00:08:54.714230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(6) to be set 00:27:19.864 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.714239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.714246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.714254] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.714261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.714269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c2dd0 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.714301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.714308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.714315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.714325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-12-10 00:08:54.714333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with id:0 cdw10:00000000 cdw11:00000000 00:27:19.864 the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-12-10 00:08:54.714343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1bc0970 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.864 [2024-12-10 00:08:54.714360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.864 [2024-12-10 00:08:54.714368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c2940 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.864 [2024-12-10 00:08:54.714444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) 
to be set 00:27:19.865 [2024-12-10 00:08:54.714469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.714546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc0970 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.715649] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:19.865 [2024-12-10 00:08:54.715732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.715745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.715758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.715766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.715775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.715782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.715790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.715796] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.715805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.715811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.715820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.715826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.715834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.715840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.715848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.715859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.715868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.715875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.715884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.715891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.715889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.715902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.715903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.715909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.715911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.715919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128[2024-12-10 00:08:54.715919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.715929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 c[2024-12-10 00:08:54.715929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.715939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.715940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.715946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.715948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.715955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.715958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.715963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.715965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.715970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.715975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.715978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.715985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with [2024-12-10 00:08:54.715989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:27:19.865 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.715998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.716001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.716006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.716009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.716014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.716019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.716021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.716028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.716029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.716037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with [2024-12-10 00:08:54.716037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128the state(6) to be set 00:27:19.865 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.716045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.716046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.716052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.716056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.716062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.716064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.716069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.716074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.865 [2024-12-10 00:08:54.716076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.716081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.865 [2024-12-10 00:08:54.716083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.865 [2024-12-10 00:08:54.716091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with [2024-12-10 00:08:54.716091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:12the state(6) to be set 00:27:19.865 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866 [2024-12-10 00:08:54.716101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with [2024-12-10 00:08:54.716102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:27:19.866 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866 [2024-12-10 00:08:54.716111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866 [2024-12-10 00:08:54.716114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866 [2024-12-10 00:08:54.716120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866 [2024-12-10 00:08:54.716121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866 [2024-12-10 00:08:54.716128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866 [2024-12-10 00:08:54.716131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866 [2024-12-10 00:08:54.716135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866 [2024-12-10 00:08:54.716139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866 [2024-12-10 00:08:54.716142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866 [2024-12-10 00:08:54.716148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866 [2024-12-10 00:08:54.716151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866 [2024-12-10 00:08:54.716156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866 [2024-12-10 00:08:54.716163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866 [2024-12-10 00:08:54.716171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866 [2024-12-10 00:08:54.716173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866 [2024-12-10 00:08:54.716178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866 [2024-12-10 00:08:54.716181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866 [2024-12-10 00:08:54.716186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866 [2024-12-10 00:08:54.716191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866 [2024-12-10 00:08:54.716194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866 [2024-12-10 00:08:54.716198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866 [2024-12-10 00:08:54.716201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866 [2024-12-10 00:08:54.716208] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1310 is same with the state(6) to be set 00:27:19.866
[2024-12-10 00:08:54.716410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866
[2024-12-10 00:08:54.716527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866
[2024-12-10 00:08:54.716533] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866 [2024-12-10 00:08:54.716541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866 [2024-12-10 00:08:54.716549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866 [2024-12-10 00:08:54.716557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866 [2024-12-10 00:08:54.716563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866 [2024-12-10 00:08:54.716571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866 [2024-12-10 00:08:54.716578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866 [2024-12-10 00:08:54.716586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866 [2024-12-10 00:08:54.716593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866 [2024-12-10 00:08:54.716601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.866 [2024-12-10 00:08:54.716607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.866 [2024-12-10 00:08:54.716616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.867 [2024-12-10 00:08:54.716622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.867 [2024-12-10 00:08:54.716631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.867 [2024-12-10 00:08:54.716637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.867 [2024-12-10 00:08:54.716645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.867 [2024-12-10 00:08:54.716652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.867 [2024-12-10 00:08:54.716660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.867 [2024-12-10 00:08:54.716668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.867 [2024-12-10 00:08:54.716676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.867 [2024-12-10 00:08:54.716683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.867 [2024-12-10 00:08:54.716691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.867 [2024-12-10 00:08:54.716698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.867 [2024-12-10 00:08:54.716709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.867 [2024-12-10 00:08:54.716716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.867 [2024-12-10 00:08:54.716724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.867 [2024-12-10 00:08:54.716730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.867 [2024-12-10 00:08:54.716739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.867 [2024-12-10 00:08:54.716746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.867 [2024-12-10 00:08:54.716756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.867 [2024-12-10 00:08:54.716762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.867 [2024-12-10 00:08:54.716770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.867 [2024-12-10 00:08:54.716777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.867 [2024-12-10 00:08:54.716785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.867 [2024-12-10 00:08:54.716791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.867 [2024-12-10 00:08:54.716800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.867 [2024-12-10 00:08:54.716807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.867 [2024-12-10 00:08:54.716815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.867 [2024-12-10 00:08:54.716821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.867 [2024-12-10 00:08:54.717531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717694] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the 
state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.717943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc17e0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.718251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:27:19.867 [2024-12-10 00:08:54.718308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1ed0 (9): Bad file descriptor 00:27:19.867 [2024-12-10 00:08:54.718609] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:19.867 [2024-12-10 00:08:54.718742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be 
set 00:27:19.867 [2024-12-10 00:08:54.718756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.718764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.718770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.718776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.718782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.867 [2024-12-10 00:08:54.718788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718876] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.718918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.719119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.868 [2024-12-10 00:08:54.719142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c1ed0 with addr=10.0.0.2, port=4420 00:27:19.868 [2024-12-10 00:08:54.719151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1ed0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.719232] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:19.868 [2024-12-10 00:08:54.719379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1ed0 (9): Bad file descriptor 00:27:19.868 [2024-12-10 00:08:54.719562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:27:19.868 [2024-12-10 00:08:54.719575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:27:19.868 [2024-12-10 00:08:54.719585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:27:19.868 [2024-12-10 00:08:54.719593] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:27:19.868 [2024-12-10 00:08:54.719651] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:19.868 [2024-12-10 00:08:54.723869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.868 [2024-12-10 00:08:54.723892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.868 [2024-12-10 00:08:54.723901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.868 [2024-12-10 00:08:54.723909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.868 [2024-12-10 00:08:54.723918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.868 [2024-12-10 00:08:54.723925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.868 [2024-12-10 00:08:54.723933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.868 [2024-12-10 00:08:54.723939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.868 [2024-12-10 00:08:54.723946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191c780 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.723975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19205b0 (9): Bad file descriptor 00:27:19.868 [2024-12-10 00:08:54.723994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ee7e0 (9): Bad file descriptor 00:27:19.868 [2024-12-10 00:08:54.724015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b6e10 (9): Bad file descriptor 00:27:19.868 [2024-12-10 00:08:54.724040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.868 [2024-12-10 00:08:54.724049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.868 [2024-12-10 00:08:54.724057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.868 [2024-12-10 00:08:54.724064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.868 [2024-12-10 00:08:54.724072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.868 [2024-12-10 00:08:54.724078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.868 [2024-12-10 00:08:54.724087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.868 [2024-12-10 00:08:54.724094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:19.868 [2024-12-10 00:08:54.724101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b73c0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.724124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.868 [2024-12-10 00:08:54.724133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.868 [2024-12-10 00:08:54.724140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.868 [2024-12-10 00:08:54.724147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.868 [2024-12-10 00:08:54.724154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.868 [2024-12-10 00:08:54.724166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.868 [2024-12-10 00:08:54.724173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.868 [2024-12-10 00:08:54.724180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.868 [2024-12-10 00:08:54.724187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b8ca0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.724202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c2dd0 (9): Bad file descriptor 00:27:19.868 [2024-12-10 00:08:54.724215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c2940 (9): Bad file descriptor 00:27:19.868 [2024-12-10 00:08:54.728672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:27:19.868 [2024-12-10 00:08:54.728872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.868 [2024-12-10 00:08:54.728887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c1ed0 with addr=10.0.0.2, port=4420 00:27:19.868 [2024-12-10 00:08:54.728895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1ed0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.728930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1ed0 (9): Bad file descriptor 00:27:19.868 [2024-12-10 00:08:54.728965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:27:19.868 [2024-12-10 00:08:54.728977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:27:19.868 [2024-12-10 00:08:54.728985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:27:19.868 [2024-12-10 00:08:54.728992] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:27:19.868 [2024-12-10 00:08:54.731946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.731959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.731969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.731977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.731985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.731993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.868 [2024-12-10 00:08:54.732118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc1cd0 is same with the state(6) to be set 00:27:19.869 [2024-12-10 00:08:54.732420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:19.869 [2024-12-10 00:08:54.732645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 
[2024-12-10 00:08:54.732802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 
00:08:54.732966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.732989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.732996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.733004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.733011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.733020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.733027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.733036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.733043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.733051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.733058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.733067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.733073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.869 [2024-12-10 00:08:54.733082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.869 [2024-12-10 00:08:54.733089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733120] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.733443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.733451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2811ec0 is same with the state(6) to be set 00:27:19.870 [2024-12-10 00:08:54.734451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:27:19.870 [2024-12-10 00:08:54.734473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191c780 (9): Bad file descriptor 00:27:19.870 [2024-12-10 00:08:54.734511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.870 [2024-12-10 00:08:54.734521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.870 [2024-12-10 00:08:54.734536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.870 [2024-12-10 00:08:54.734550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:19.870 [2024-12-10 00:08:54.734564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19152a0 is same with the state(6) to be set 00:27:19.870 [2024-12-10 00:08:54.734603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b73c0 (9): Bad file descriptor 00:27:19.870 [2024-12-10 00:08:54.734619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b8ca0 (9): Bad file descriptor 00:27:19.870 [2024-12-10 00:08:54.734730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 
00:08:54.734785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.734984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.734993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.735000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.735008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.735015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.735024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.735030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.735040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.735047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.735055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.735063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.870 [2024-12-10 00:08:54.735071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.870 [2024-12-10 00:08:54.735078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735264] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.735742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.735749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c6dd0 is same with the state(6) to be set 00:27:19.871 [2024-12-10 00:08:54.736738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.736749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.736760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.736767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.736776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.736783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.736793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.736800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.736809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.736815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.736823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.736831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.736839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.736845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.736854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.736861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.736868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.736876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.871 [2024-12-10 00:08:54.736884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.871 [2024-12-10 00:08:54.736890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.736904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.736910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.736920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.736927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.736936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.736942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.736951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.736959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.736968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.736976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.736984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.736991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:19.872 [2024-12-10 00:08:54.737371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.737402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.737409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 
00:08:54.741749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.872 [2024-12-10 00:08:54.741901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.872 [2024-12-10 00:08:54.741909] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.873 [2024-12-10 00:08:54.741916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.873 [2024-12-10 00:08:54.741927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.873 [2024-12-10 00:08:54.741933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.873 [2024-12-10 00:08:54.741943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.873 [2024-12-10 00:08:54.741950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.873 [2024-12-10 00:08:54.741958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.873 [2024-12-10 00:08:54.741966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.873 [2024-12-10 00:08:54.741974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.873 [2024-12-10 00:08:54.741982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.873 [2024-12-10 00:08:54.741989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c7d80 is same with the state(6) to be set 00:27:19.873 [2024-12-10 00:08:54.742974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.873 [2024-12-10 00:08:54.742987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.873 [2024-12-10 00:08:54.742997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.873 [2024-12-10 00:08:54.743005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.873 [2024-12-10 00:08:54.743014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.873 [2024-12-10 00:08:54.743022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.873 [2024-12-10 00:08:54.743031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.873 [2024-12-10 00:08:54.743038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.873 [2024-12-10 00:08:54.743047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.873 [2024-12-10 00:08:54.743054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-10 00:08:54.743062 - 00:08:54.744014: nvme_qpair.c printed the same *NOTICE* pair for each outstanding command on this qpair: READ sqid:1 cid:5-63 nsid:1 lba:17024-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:19.874 [2024-12-10 00:08:54.744021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c8e90 is same with the state(6) to be set
[2024-12-10 00:08:54.745015 - 00:08:54.746054: nvme_qpair.c printed the same *NOTICE* pair for READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:19.875 [2024-12-10 00:08:54.746062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c74e0 is same with the state(6) to be set
[2024-12-10 00:08:54.747299 - 00:08:54.748379: nvme_qpair.c printed the same *NOTICE* pair for READ sqid:1 cid:0-63 nsid:1 lba:8192-16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:19.876 [2024-12-10 00:08:54.748387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1707f90 is same with the state(6) to be set
00:27:19.876 [2024-12-10 00:08:54.749392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:27:19.876 [2024-12-10 00:08:54.749414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:27:19.876 [2024-12-10 00:08:54.749425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:27:19.876 [2024-12-10 00:08:54.749436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:27:19.876 [2024-12-10 00:08:54.749717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.876 [2024-12-10 00:08:54.749735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191c780 with addr=10.0.0.2, port=4420
00:27:19.876 [2024-12-10 00:08:54.749751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191c780 is same with the state(6) to be set
00:27:19.876 [2024-12-10 00:08:54.749786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19152a0 (9): Bad file descriptor
00:27:19.876 [2024-12-10 00:08:54.749809] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:27:19.876 [2024-12-10 00:08:54.749838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191c780 (9): Bad file descriptor
00:27:19.876 [2024-12-10 00:08:54.749911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:27:19.876 [2024-12-10 00:08:54.750095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.876 [2024-12-10 00:08:54.750111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c2dd0 with addr=10.0.0.2, port=4420
00:27:19.876 [2024-12-10 00:08:54.750119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c2dd0 is same with the state(6) to be set
00:27:19.876 [2024-12-10 00:08:54.750259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.876 [2024-12-10 00:08:54.750274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b6e10 with addr=10.0.0.2, port=4420
00:27:19.876 [2024-12-10 00:08:54.750283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b6e10 is same with the state(6) to be set
00:27:19.876 [2024-12-10 00:08:54.750408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.876 [2024-12-10 00:08:54.750420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c2940 with addr=10.0.0.2, port=4420
00:27:19.876 [2024-12-10 00:08:54.750429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c2940 is same with the state(6) to be set
00:27:19.876 [2024-12-10 00:08:54.750548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:19.876 [2024-12-10 00:08:54.750561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ee7e0 with addr=10.0.0.2, port=4420
00:27:19.876 [2024-12-10 00:08:54.750569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee7e0 is same with the state(6) to be set
READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.876 [2024-12-10 00:08:54.751559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.876 [2024-12-10 00:08:54.751568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.876 [2024-12-10 00:08:54.751575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.876 [2024-12-10 00:08:54.751584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.876 [2024-12-10 00:08:54.751591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.876 [2024-12-10 00:08:54.751600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.876 [2024-12-10 00:08:54.751612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.876 [2024-12-10 00:08:54.751621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.876 [2024-12-10 00:08:54.751628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.876 [2024-12-10 00:08:54.751637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.876 [2024-12-10 00:08:54.751644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.876 [2024-12-10 00:08:54.751653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.876 [2024-12-10 00:08:54.751660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.876 [2024-12-10 00:08:54.751669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.876 [2024-12-10 00:08:54.751676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.876 [2024-12-10 00:08:54.751685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.876 [2024-12-10 00:08:54.751692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.751982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.751990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:19.877 [2024-12-10 00:08:54.752221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 
00:08:54.752384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752548] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.752564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.752572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c9b00 is same with the state(6) to be set 00:27:19.877 [2024-12-10 00:08:54.753602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.753615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.753626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.753634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.753643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.877 [2024-12-10 00:08:54.753651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.877 [2024-12-10 00:08:54.753661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.753986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.753994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.878 [2024-12-10 00:08:54.754572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.878 [2024-12-10 00:08:54.754578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:19.879 [2024-12-10 00:08:54.754588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.879 [2024-12-10 00:08:54.754595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:19.879 [2024-12-10 00:08:54.754604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.879 [2024-12-10 00:08:54.754611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:19.879 [2024-12-10 00:08:54.754620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.879 [2024-12-10 00:08:54.754627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:19.879 [2024-12-10 00:08:54.754637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.879 [2024-12-10 00:08:54.754644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:19.879 [2024-12-10 00:08:54.754653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.879 [2024-12-10 00:08:54.754660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:19.879 [2024-12-10 00:08:54.754669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:19.879 [2024-12-10 00:08:54.754676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:19.879 [2024-12-10 00:08:54.754685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25c4670 is same with the state(6) to be set
00:27:19.879 [2024-12-10 00:08:54.755922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:27:19.879 [2024-12-10 00:08:54.755943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:27:19.879 task offset: 23552 on job bdev=Nvme5n1 fails
00:27:19.879
00:27:19.879 Latency(us)
00:27:19.879 [2024-12-09T23:08:54.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:19.879 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:19.879 Job: Nvme1n1 ended in about 0.64 seconds with error
00:27:19.879 Verification LBA range: start 0x0 length 0x400
00:27:19.879 Nvme1n1 : 0.64 200.83 12.55 100.42 0.00 209285.42 29633.67 227039.50
00:27:19.879 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:19.879 Job: Nvme2n1 ended in about 0.64 seconds with error
00:27:19.879 Verification LBA range: start 0x0 length 0x400
00:27:19.879 Nvme2n1 : 0.64 99.44 6.22 99.44 0.00 309153.17 16298.52 253481.85
00:27:19.879 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:19.879 Job: Nvme3n1 ended in about 0.65 seconds with error
00:27:19.879 Verification LBA range: start 0x0 length 0x400
00:27:19.879 Nvme3n1 : 0.65 198.26 12.39 99.13 0.00 201392.90 15044.79 209715.20
00:27:19.879 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:19.879 Job: Nvme4n1 ended in about 0.65 seconds with error
00:27:19.879 Verification LBA range: start 0x0 length 0x400
00:27:19.879 Nvme4n1 : 0.65 197.64 12.35 98.82 0.00 196750.47 16070.57 201508.95
00:27:19.879 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:19.879 Job: Nvme5n1 ended in about 0.62 seconds with error
00:27:19.879 Verification LBA range: start 0x0 length 0x400
00:27:19.879 Nvme5n1 : 0.62 206.85 12.93 103.43 0.00 181839.17 1966.08 217009.64
00:27:19.879 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:19.879 Job: Nvme6n1 ended in about 0.65 seconds with error
00:27:19.879 Verification LBA range: start 0x0 length 0x400
00:27:19.879 Nvme6n1 : 0.65 201.79 12.61 97.84 0.00 184433.25 15158.76 211538.81
00:27:19.879 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:19.879 Job: Nvme7n1 ended in about 0.66 seconds with error
00:27:19.879 Verification LBA range: start 0x0 length 0x400
00:27:19.879 Nvme7n1 : 0.66 195.05 12.19 97.52 0.00 183746.56 26556.33 198773.54
00:27:19.879 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:19.879 Job: Nvme8n1 ended in about 0.64 seconds with error
00:27:19.879 Verification LBA range: start 0x0 length 0x400
00:27:19.879 Nvme8n1 : 0.64 201.53 12.60 100.77 0.00 171428.29 18578.03 209715.20
00:27:19.879 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:19.879 Verification LBA range: start 0x0 length 0x400
00:27:19.879 Nvme9n1 : 0.62 205.64 12.85 0.00 0.00 242968.93 18008.15 242540.19
00:27:19.879 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:19.879 Job: Nvme10n1 ended in about 0.65 seconds with error
00:27:19.879 Verification LBA range: start 0x0 length 0x400
00:27:19.879 Nvme10n1 : 0.65 98.47 6.15 98.47 0.00 249012.76 32369.09 222480.47
00:27:19.879 [2024-12-09T23:08:54.815Z] ===================================================================================================================
00:27:19.879 [2024-12-09T23:08:54.815Z] Total : 1805.52 112.85 895.84 0.00 206944.11 1966.08 253481.85
00:27:20.140 [2024-12-10 00:08:54.785433] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:20.140 [2024-12-10 00:08:54.785478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:27:20.140 [2024-12-10 00:08:54.785790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:20.140 [2024-12-10 00:08:54.785811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19205b0 with addr=10.0.0.2, port=4420
00:27:20.140 [2024-12-10 00:08:54.785823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19205b0 is same with the state(6) to be set
00:27:20.140 [2024-12-10 00:08:54.785839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c2dd0 (9): Bad file descriptor
00:27:20.140 [2024-12-10 00:08:54.785851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b6e10 (9): Bad file descriptor
00:27:20.140 [2024-12-10 00:08:54.785861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c2940 (9): Bad file descriptor
00:27:20.140 [2024-12-10 00:08:54.785875]
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ee7e0 (9): Bad file descriptor 00:27:20.140 [2024-12-10 00:08:54.785884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:27:20.140 [2024-12-10 00:08:54.785892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:27:20.140 [2024-12-10 00:08:54.785901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:27:20.140 [2024-12-10 00:08:54.785910] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:27:20.140 [2024-12-10 00:08:54.786349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-10 00:08:54.786370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c1ed0 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-10 00:08:54.786380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c1ed0 is same with the state(6) to be set 00:27:20.140 [2024-12-10 00:08:54.786623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-10 00:08:54.786635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b73c0 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-10 00:08:54.786642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b73c0 is same with the state(6) to be set 00:27:20.140 [2024-12-10 00:08:54.786833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-10 00:08:54.786845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b8ca0 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-10 00:08:54.786852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b8ca0 is same with the state(6) to be set 00:27:20.140 [2024-12-10 00:08:54.786864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19205b0 (9): Bad file descriptor 00:27:20.140 [2024-12-10 00:08:54.786875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:27:20.140 [2024-12-10 00:08:54.786882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:27:20.140 [2024-12-10 00:08:54.786889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:27:20.140 [2024-12-10 00:08:54.786897] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:27:20.140 [2024-12-10 00:08:54.786906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:27:20.140 [2024-12-10 00:08:54.786913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:27:20.140 [2024-12-10 00:08:54.786920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:27:20.140 [2024-12-10 00:08:54.786927] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
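The repeated "posix_sock_create: connect() failed, errno = 111" lines above are the host-side symptom of the target no longer accepting NVMe/TCP connections on 10.0.0.2:4420 while the shutdown test tears it down: on Linux, errno 111 is ECONNREFUSED. The following is a minimal standalone C sketch, illustrative only and not SPDK code; it assumes a Linux host and that nothing is listening on 127.0.0.1:4420 (both placeholders), and reproduces the same errno from a plain connect() call.

/* Standalone sketch (not SPDK code): connect() to a TCP port with no
 * listener fails with errno 111 (ECONNREFUSED) on Linux, which is the
 * error posix_sock_create reports above. Address and port are placeholders;
 * 4420 is only used here because it is the NVMe/TCP default port. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* assumed-unused port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumed: no listener */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* On Linux this typically prints: errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run on a host with nothing bound to that port, it should print "connect() failed, errno = 111 (Connection refused)", matching the messages in this log.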
00:27:20.140 [2024-12-10 00:08:54.786934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:27:20.140 [2024-12-10 00:08:54.786940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:27:20.140 [2024-12-10 00:08:54.786947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:27:20.140 [2024-12-10 00:08:54.786953] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:27:20.140 [2024-12-10 00:08:54.786961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:27:20.140 [2024-12-10 00:08:54.786967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:27:20.140 [2024-12-10 00:08:54.786978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:27:20.140 [2024-12-10 00:08:54.786984] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:27:20.140 [2024-12-10 00:08:54.787040] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:27:20.140 [2024-12-10 00:08:54.787590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c1ed0 (9): Bad file descriptor 00:27:20.140 [2024-12-10 00:08:54.787604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b73c0 (9): Bad file descriptor 00:27:20.140 [2024-12-10 00:08:54.787613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b8ca0 (9): Bad file descriptor 00:27:20.140 [2024-12-10 00:08:54.787621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:27:20.140 [2024-12-10 00:08:54.787627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:27:20.140 [2024-12-10 00:08:54.787634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:27:20.140 [2024-12-10 00:08:54.787641] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
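The "(00/08)" pair printed with every "ABORTED - SQ DELETION" completion in this output is the NVMe status code type and status code: type 0x0 is the generic command status set, and code 0x08 within it is "Command Aborted due to SQ Deletion", which is expected here because the target deletes the I/O submission queues while the controllers are being reset. Below is a small standalone C sketch, illustrative only and not SPDK's own decoder, showing how that pair maps to the text seen in the log.

/* Standalone sketch (not SPDK code): interpret the "(SCT/SC)" pair that the
 * completion printouts above show, e.g. "(00/08)". SCT 0x0 selects the
 * generic command status set; SC 0x08 in that set is the SQ-deletion abort. */
#include <stdio.h>

static const char *generic_status_name(unsigned int sc)
{
    switch (sc) {
    case 0x00: return "SUCCESSFUL COMPLETION";
    case 0x07: return "COMMAND ABORT REQUESTED";
    case 0x08: return "ABORTED - SQ DELETION";
    default:   return "OTHER GENERIC STATUS";
    }
}

int main(void)
{
    unsigned int sct = 0x00;  /* status code type, the "00" in "(00/08)" */
    unsigned int sc  = 0x08;  /* status code, the "08" in "(00/08)"      */

    if (sct == 0x00) {
        printf("(%02x/%02x) -> %s\n", sct, sc, generic_status_name(sc));
    } else {
        printf("(%02x/%02x) -> non-generic status code type\n", sct, sc);
    }
    return 0;
}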
00:27:20.140 [2024-12-10 00:08:54.787683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:27:20.140 [2024-12-10 00:08:54.787694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:27:20.140 [2024-12-10 00:08:54.787703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:27:20.140 [2024-12-10 00:08:54.787712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:27:20.140 [2024-12-10 00:08:54.787720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:27:20.140 [2024-12-10 00:08:54.787728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:27:20.140 [2024-12-10 00:08:54.787766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:27:20.140 [2024-12-10 00:08:54.787774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:27:20.140 [2024-12-10 00:08:54.787781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:27:20.140 [2024-12-10 00:08:54.787787] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:27:20.140 [2024-12-10 00:08:54.787794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:27:20.140 [2024-12-10 00:08:54.787800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:27:20.140 [2024-12-10 00:08:54.787806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:27:20.140 [2024-12-10 00:08:54.787813] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:27:20.140 [2024-12-10 00:08:54.787820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:27:20.140 [2024-12-10 00:08:54.787827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:27:20.140 [2024-12-10 00:08:54.787835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:27:20.140 [2024-12-10 00:08:54.787841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:27:20.140 [2024-12-10 00:08:54.788016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-10 00:08:54.788033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19152a0 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-10 00:08:54.788042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19152a0 is same with the state(6) to be set 00:27:20.140 [2024-12-10 00:08:54.788263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-10 00:08:54.788275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191c780 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-10 00:08:54.788283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x191c780 is same with the state(6) to be set 00:27:20.140 [2024-12-10 00:08:54.788501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-10 00:08:54.788511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18ee7e0 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-10 00:08:54.788519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18ee7e0 is same with the state(6) to be set 00:27:20.140 [2024-12-10 00:08:54.788744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-10 00:08:54.788754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c2940 with addr=10.0.0.2, port=4420 00:27:20.140 [2024-12-10 00:08:54.788762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c2940 is same with the state(6) to be set 00:27:20.140 [2024-12-10 00:08:54.788833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.140 [2024-12-10 00:08:54.788843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b6e10 with addr=10.0.0.2, port=4420 00:27:20.141 [2024-12-10 00:08:54.788850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b6e10 is same with the state(6) to be set 00:27:20.141 [2024-12-10 00:08:54.788972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.141 [2024-12-10 00:08:54.788983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c2dd0 with addr=10.0.0.2, port=4420 00:27:20.141 [2024-12-10 00:08:54.788991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14c2dd0 is same with the state(6) to be set 00:27:20.141 [2024-12-10 00:08:54.789022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19152a0 (9): Bad file descriptor 00:27:20.141 [2024-12-10 00:08:54.789032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x191c780 (9): Bad file descriptor 00:27:20.141 [2024-12-10 00:08:54.789042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ee7e0 (9): Bad file descriptor 00:27:20.141 [2024-12-10 00:08:54.789051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c2940 (9): Bad file descriptor 00:27:20.141 [2024-12-10 00:08:54.789060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b6e10 (9): Bad file descriptor 00:27:20.141 [2024-12-10 00:08:54.789068] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c2dd0 (9): Bad file descriptor 00:27:20.141 [2024-12-10 00:08:54.789093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:27:20.141 [2024-12-10 00:08:54.789101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:27:20.141 [2024-12-10 00:08:54.789108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:27:20.141 [2024-12-10 00:08:54.789116] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:27:20.141 [2024-12-10 00:08:54.789123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:27:20.141 [2024-12-10 00:08:54.789129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:27:20.141 [2024-12-10 00:08:54.789139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:27:20.141 [2024-12-10 00:08:54.789145] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:27:20.141 [2024-12-10 00:08:54.789155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:27:20.141 [2024-12-10 00:08:54.789166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:27:20.141 [2024-12-10 00:08:54.789173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:27:20.141 [2024-12-10 00:08:54.789179] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:27:20.141 [2024-12-10 00:08:54.789186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:27:20.141 [2024-12-10 00:08:54.789192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:27:20.141 [2024-12-10 00:08:54.789198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:27:20.141 [2024-12-10 00:08:54.789204] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:27:20.141 [2024-12-10 00:08:54.789212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:27:20.141 [2024-12-10 00:08:54.789218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:27:20.141 [2024-12-10 00:08:54.789225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:27:20.141 [2024-12-10 00:08:54.789231] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:27:20.141 [2024-12-10 00:08:54.789238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:27:20.141 [2024-12-10 00:08:54.789244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:27:20.141 [2024-12-10 00:08:54.789250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:27:20.141 [2024-12-10 00:08:54.789256] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:27:20.402 00:08:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 429963 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 429963 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 429963 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:21.343 00:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:21.343 rmmod nvme_tcp 00:27:21.343 rmmod nvme_fabrics 00:27:21.343 rmmod nvme_keyring 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 429885 ']' 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 429885 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 429885 ']' 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 429885 00:27:21.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (429885) - No such process 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 429885 is not found' 00:27:21.343 Process with pid 429885 is not found 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.343 00:08:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.883 00:08:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:23.883 00:27:23.883 real 0m7.204s 00:27:23.883 user 0m16.768s 00:27:23.883 sys 0m1.256s 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:23.883 ************************************ 00:27:23.883 END TEST nvmf_shutdown_tc3 00:27:23.883 ************************************ 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:23.883 ************************************ 00:27:23.883 START TEST nvmf_shutdown_tc4 00:27:23.883 ************************************ 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:23.883 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:23.884 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:23.884 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:23.884 Found net devices under 0000:86:00.0: cvl_0_0 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:23.884 Found net devices under 0000:86:00.1: cvl_0_1 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
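The probe loop above maps each detected E810 PCI function (0000:86:00.0 and 0000:86:00.1, device ID 0x159b) to its kernel net device by globbing the sysfs net/ directory, which is how the cvl_0_0 and cvl_0_1 interfaces used in the rest of this test are found. A stand-alone sketch of the same lookup (the PCI addresses are taken from this log; the rest is generic sysfs usage):
# resolve PCI function -> net device name(s), as nvmf/common.sh does via pci_net_devs
for pci in 0000:86:00.0 0000:86:00.1; do
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
done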
00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:23.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:27:23.884 00:27:23.884 --- 10.0.0.2 ping statistics --- 00:27:23.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.884 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:23.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:27:23.884 00:27:23.884 --- 10.0.0.1 ping statistics --- 00:27:23.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.884 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=431203 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 431203 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 431203 ']' 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
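Condensed from the nvmftestinit trace above: one E810 port (cvl_0_0) is moved into the network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule opens TCP port 4420, connectivity is verified with ping in both directions, and nvmf_tgt is then started inside the namespace. A minimal sketch of the equivalent manual steps (interface names, addresses, and the nvmf_tgt arguments are taken verbatim from the log; the binary path is abbreviated):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> namespaced target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns
# start the target inside the namespace (core mask 0x1E, tracepoint mask 0xFFFF)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &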
00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.884 00:08:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:23.884 [2024-12-10 00:08:58.724602] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:27:23.884 [2024-12-10 00:08:58.724652] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.884 [2024-12-10 00:08:58.803367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.144 [2024-12-10 00:08:58.845591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.144 [2024-12-10 00:08:58.845625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.144 [2024-12-10 00:08:58.845632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.144 [2024-12-10 00:08:58.845638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.144 [2024-12-10 00:08:58.845643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:24.144 [2024-12-10 00:08:58.850177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.144 [2024-12-10 00:08:58.850266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:24.144 [2024-12-10 00:08:58.850365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:24.144 [2024-12-10 00:08:58.850366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:24.712 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:24.712 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:27:24.712 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:24.712 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:24.712 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:24.712 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.712 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:24.713 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.713 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:24.713 [2024-12-10 00:08:59.630620] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.713 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.713 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:24.713 00:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:24.713 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:24.713 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:24.713 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:27:24.713 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.972 00:08:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:24.972 Malloc1 
00:27:24.972 [2024-12-10 00:08:59.732473] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.972 Malloc2 00:27:24.972 Malloc3 00:27:24.972 Malloc4 00:27:24.972 Malloc5 00:27:25.231 Malloc6 00:27:25.231 Malloc7 00:27:25.231 Malloc8 00:27:25.231 Malloc9 00:27:25.231 Malloc10 00:27:25.231 00:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.231 00:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:25.231 00:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:25.231 00:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:25.231 00:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=431507 00:27:25.231 00:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:27:25.231 00:09:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:27:25.490 [2024-12-10 00:09:00.237384] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:27:30.779 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:30.779 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 431203 00:27:30.779 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 431203 ']' 00:27:30.779 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 431203 00:27:30.779 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:27:30.779 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:30.779 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 431203 00:27:30.779 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:30.779 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:30.779 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 431203' 00:27:30.779 killing process with pid 431203 00:27:30.779 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 431203 00:27:30.779 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 431203 00:27:30.779 [2024-12-10 00:09:05.230796] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892740 is same with the state(6) to be set 00:27:30.779 [2024-12-10 00:09:05.230841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892740 is same with the state(6) to be set 00:27:30.779 [2024-12-10 00:09:05.230855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892740 is same with the state(6) to be set 00:27:30.779 [2024-12-10 00:09:05.230862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892740 is same with the state(6) to be set 00:27:30.779 [2024-12-10 00:09:05.230868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892740 is same with the state(6) to be set 00:27:30.779 [2024-12-10 00:09:05.230874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x892740 is same with the state(6) to be set 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 starting I/O failed: -6 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 starting I/O failed: -6 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 starting I/O failed: -6 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 starting I/O failed: -6 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 starting I/O failed: -6 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 starting I/O failed: -6 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.779 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 [2024-12-10 00:09:05.234154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaeee50 is same with the state(6) to be set 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 
[2024-12-10 00:09:05.234203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaeee50 is same with the state(6) to be set 00:27:30.780 [2024-12-10 00:09:05.234240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 Write completed with error (sct=0, sc=8) 
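The long runs of "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" around this point are the intended outcome of shutdown_tc4: spdk_nvme_perf is driving queue-depth-128 random writes against the ten subsystems when the target process is killed, so in-flight commands complete with an error status (sct=0/sc=8, which in the NVMe generic status set appears to correspond to a command aborted when its queue is deleted) and the dead queue pairs report transport error -6 (No such device or address), as the nvme_qpair.c entries above show. A minimal sketch of that sequence, reusing the perf arguments recorded earlier in this log (binary path abbreviated; the pid lookup is a placeholder, the real test resolves the target pid itself):
./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
perfpid=$!
sleep 5                          # the trace above shows shutdown.sh sleeping 5 s here too
kill "$(pgrep -f nvmf_tgt)"      # placeholder pid lookup; aborts all in-flight writes
wait "$perfpid"                  # perf exits after reporting the failed I/O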
00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 [2024-12-10 00:09:05.235216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.780 starting I/O failed: -6 00:27:30.780 starting I/O failed: -6 00:27:30.780 starting I/O failed: -6 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.780 starting I/O failed: -6 00:27:30.780 Write completed with error (sct=0, sc=8) 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error 
(sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 [2024-12-10 00:09:05.236435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 [2024-12-10 00:09:05.236780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897ee0 is same with the state(6) to be set 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 [2024-12-10 00:09:05.236816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897ee0 is same with the state(6) to be set 00:27:30.781 starting I/O failed: -6 00:27:30.781 [2024-12-10 00:09:05.236824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897ee0 is same with the state(6) to be set 00:27:30.781 [2024-12-10 00:09:05.236832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897ee0 is same with the state(6) to be set 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 [2024-12-10 00:09:05.236838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897ee0 is same with the state(6) to be set 00:27:30.781 starting I/O failed: -6 00:27:30.781 [2024-12-10 00:09:05.236846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897ee0 is same with the state(6) to be set 00:27:30.781 [2024-12-10 00:09:05.236853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897ee0 is same with the state(6) to be set 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 [2024-12-10 00:09:05.236859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897ee0 is same with the state(6) to be set 00:27:30.781 starting I/O failed: -6 00:27:30.781 [2024-12-10 00:09:05.236866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897ee0 is same with the state(6) to be set 00:27:30.781 [2024-12-10 00:09:05.236873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897ee0 is same with the state(6) to be set 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781
starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.781 starting I/O failed: -6 00:27:30.781 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 [2024-12-10 00:09:05.237273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8983b0 is same with the state(6) to be set 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 [2024-12-10 00:09:05.237305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8983b0 is same with the state(6) to be set 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 [2024-12-10 00:09:05.237319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8983b0 is same with the state(6) to be set 00:27:30.782 [2024-12-10 00:09:05.237327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8983b0 is same with the state(6) to be set 00:27:30.782 [2024-12-10 00:09:05.237333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8983b0 is same with the state(6) to be set 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 [2024-12-10 00:09:05.237340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8983b0 is same with the state(6) to be set 00:27:30.782 [2024-12-10 00:09:05.237347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8983b0 is same with the state(6) to be set 00:27:30.782 [2024-12-10 00:09:05.237354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8983b0 is same with the state(6) to be set 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 [2024-12-10 00:09:05.237362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8983b0 is same with the state(6) to be set 00:27:30.782 [2024-12-10 00:09:05.237368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8983b0 is same with the state(6) to be set 00:27:30.782 [2024-12-10 00:09:05.237374]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8983b0 is same with the state(6) to be set 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 starting I/O failed: -6 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 [2024-12-10 00:09:05.237656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaeead0 is same with the state(6) to be set 00:27:30.782 starting I/O failed: -6 00:27:30.782 [2024-12-10 00:09:05.237678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaeead0 is same with the state(6) to be set 00:27:30.782 Write completed with error (sct=0, sc=8) 00:27:30.782 [2024-12-10 00:09:05.237685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaeead0 is same with the state(6) to be set 00:27:30.782 starting I/O failed: -6 00:27:30.784 [2024-12-10 00:09:05.237692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaeead0 is same with the state(6) to be set 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 [2024-12-10 00:09:05.237699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaeead0 is same with the state(6) to be set 00:27:30.784 starting I/O failed: -6 00:27:30.784 [2024-12-10 00:09:05.237706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaeead0 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.237712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaeead0 is same with the state(6) to be set 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 [2024-12-10 00:09:05.237720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaeead0 is same with the state(6) to be set 00:27:30.784 starting I/O failed: -6 00:27:30.784 [2024-12-10 00:09:05.237727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaeead0 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.237734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaeead0 is same with the state(6) to be set 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784
starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 [2024-12-10 00:09:05.238044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897a10 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.238063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897a10 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.238070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897a10 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.238077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897a10 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.238083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897a10 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.238090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897a10 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.238096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897a10 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.238103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897a10 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.238110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897a10 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.238116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897a10 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.238122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897a10 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.238128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x897a10 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.238198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:30.784 NVMe io qpair process completion error 00:27:30.784 [2024-12-10 00:09:05.240695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0660 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.240720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0660 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.240728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0660 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.240735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0660 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.240742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0660 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.240748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0660 is same with 
the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.240754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0660 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.240760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0660 is same with the state(6) to be set 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 [2024-12-10 00:09:05.241538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0190 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.241560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0190 is same with the state(6) to be set 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 [2024-12-10 00:09:05.241568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0190 is same with the state(6) to be set 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 [2024-12-10 00:09:05.241576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0190 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.241583]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0190 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.241590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf0190 is same with the state(6) to be set 00:27:30.784 [2024-12-10 00:09:05.241602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write 
completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 [2024-12-10 00:09:05.242468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.784 starting I/O failed: -6 00:27:30.784 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 
00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 [2024-12-10 00:09:05.243546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 
00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 [2024-12-10 00:09:05.245163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:30.785 NVMe io qpair process completion error 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 
00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 [2024-12-10 00:09:05.246325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write 
completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 [2024-12-10 00:09:05.247123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, 
sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 Write completed with error (sct=0, sc=8) 00:27:30.785 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 [2024-12-10 00:09:05.248153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write 
completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.786 Write completed with error (sct=0, sc=8) 00:27:30.786 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write 
completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 [2024-12-10 00:09:05.249765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:30.790 NVMe io qpair process completion error 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 Write completed with error (sct=0, sc=8) 00:27:30.790 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error 
(sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 [2024-12-10 00:09:05.250830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:30.791 starting I/O failed: -6 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 [2024-12-10 00:09:05.251734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6 00:27:30.791 Write completed with error (sct=0, sc=8) 00:27:30.791 
Write completed with error (sct=0, sc=8) 00:27:30.791 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:27:30.791 [2024-12-10 00:09:05.252757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:30.791 [2024-12-10 00:09:05.254900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.791 NVMe io qpair process completion error
00:27:30.792 [2024-12-10 00:09:05.255823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:30.792 [2024-12-10 00:09:05.256754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.792 [2024-12-10 00:09:05.257757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:30.793 [2024-12-10 00:09:05.263434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.793 NVMe io qpair process completion error
00:27:30.793 [2024-12-10 00:09:05.264419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:30.793 [2024-12-10 00:09:05.265231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.793 [2024-12-10 00:09:05.266455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:30.794 [2024-12-10 00:09:05.269594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.794 NVMe io qpair process completion error
00:27:30.794 [2024-12-10 00:09:05.270633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:30.795 [2024-12-10 00:09:05.271523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.799 [2024-12-10 00:09:05.272559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:30.800 [2024-12-10 00:09:05.274189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.800 NVMe io qpair process completion error
00:27:30.801 [2024-12-10 00:09:05.275136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:30.801 [2024-12-10 00:09:05.276054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:30.802 [2024-12-10 00:09:05.277120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:30.803 [2024-12-10 00:09:05.283454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:30.803 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 Write
completed with error (sct=0, sc=8) 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 starting I/O failed: -6 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 starting I/O failed: -6 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 starting I/O failed: -6 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 starting I/O failed: -6 00:27:30.803 [2024-12-10 00:09:05.284386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:30.803 Write completed with error (sct=0, sc=8) 00:27:30.803 starting I/O failed: -6 00:27:30.806 Write completed with error (sct=0, sc=8) 00:27:30.806 starting I/O failed: -6 00:27:30.806 Write completed with error (sct=0, sc=8) 00:27:30.806 Write completed with error (sct=0, sc=8) 00:27:30.806 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 
00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 [2024-12-10 00:09:05.285191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 Write completed with error (sct=0, sc=8) 00:27:30.807 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed 
with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 [2024-12-10 00:09:05.286237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with 
error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.808 starting I/O failed: -6 00:27:30.808 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 [2024-12-10 00:09:05.290928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:30.809 NVMe io qpair process completion error 00:27:30.809 Write completed with error (sct=0, sc=8) 
00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.809 starting I/O failed: -6 00:27:30.809 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error 
(sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with 
error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 [2024-12-10 00:09:05.293418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.810 Write completed with error (sct=0, sc=8) 00:27:30.810 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 
00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 
00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 Write completed with error (sct=0, sc=8) 00:27:30.811 starting I/O failed: -6 00:27:30.811 [2024-12-10 00:09:05.296002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:30.811 NVMe io qpair process completion error 00:27:30.811 Initializing NVMe Controllers 00:27:30.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:27:30.811 Controller IO queue size 128, less than required. 00:27:30.811 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:30.811 Controller IO queue size 128, less than required. 00:27:30.811 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:27:30.811 Controller IO queue size 128, less than required. 00:27:30.811 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:27:30.811 Controller IO queue size 128, less than required. 00:27:30.811 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:27:30.811 Controller IO queue size 128, less than required. 00:27:30.811 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:27:30.811 Controller IO queue size 128, less than required. 00:27:30.811 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:27:30.811 Controller IO queue size 128, less than required. 00:27:30.811 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:27:30.811 Controller IO queue size 128, less than required. 00:27:30.811 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:27:30.811 Controller IO queue size 128, less than required. 00:27:30.811 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:27:30.811 Controller IO queue size 128, less than required. 00:27:30.811 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
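Note: the "Controller IO queue size 128, less than required" warnings mean the perf workload requested a deeper queue than the controller reports for its IO queues, so the surplus requests sit queued in the NVMe driver until submission slots free up. As a minimal sketch, one way to rerun this kind of workload with a lower queue depth and smaller IO size is shown below; the binary path, target address, and subsystem name are taken from this log, while the -r/-q/-o/-w/-t flags are the usual spdk_nvme_perf options and should be checked against --help on your build:

  # hypothetical rerun with a queue depth at or below the controller's reported limit of 128
  /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -q 64 -o 4096 -w write -t 5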
00:27:30.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:27:30.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:30.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:27:30.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:27:30.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:27:30.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:27:30.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:27:30.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:27:30.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:27:30.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:27:30.811 Initialization complete. Launching workers.
00:27:30.811 ========================================================
00:27:30.811 Latency(us)
00:27:30.811 Device Information : IOPS MiB/s Average min max
00:27:30.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2187.72 94.00 58514.42 806.99 110438.61
00:27:30.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2169.06 93.20 59030.18 724.75 135174.05
00:27:30.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2172.13 93.33 58964.62 696.21 108354.73
00:27:30.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2176.96 93.54 58897.32 700.52 111325.74
00:27:30.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2167.96 93.15 59171.75 741.08 115324.68
00:27:30.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2150.18 92.39 59673.63 675.53 118210.73
00:27:30.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2160.49 92.83 59458.80 693.36 103452.18
00:27:30.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2100.33 90.25 60446.25 726.45 101983.45
00:27:30.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2191.89 94.18 58607.28 609.51 128029.68
00:27:30.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2168.40 93.17 58544.80 874.74 101453.54
00:27:30.811 ========================================================
00:27:30.811 Total : 21645.12 930.06 59124.95 609.51 135174.05
00:27:30.811
00:27:30.811 [2024-12-10 00:09:05.299024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120c740 is same with the state(6) to be set
00:27:30.811 [2024-12-10 00:09:05.299070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120d720 is same with the state(6) to be set
00:27:30.811 [2024-12-10 00:09:05.299100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120ca70 is same with the state(6) to be set
00:27:30.811 [2024-12-10 00:09:05.299129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120b890 is same with the state(6) to be set
00:27:30.811 [2024-12-10 00:09:05.299163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120bef0 is same with the state(6) to be set
00:27:30.811 [2024-12-10 00:09:05.299191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x120c410 is same with the state(6) to be set 00:27:30.811 [2024-12-10 00:09:05.299220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120b560 is same with the state(6) to be set 00:27:30.811 [2024-12-10 00:09:05.299250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120d900 is same with the state(6) to be set 00:27:30.811 [2024-12-10 00:09:05.299279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120bbc0 is same with the state(6) to be set 00:27:30.811 [2024-12-10 00:09:05.299308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120dae0 is same with the state(6) to be set 00:27:30.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf: errors occurred 00:27:30.811 00:09:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 431507 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 431507 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 431507 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:31.750 00:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:31.750 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:31.750 rmmod nvme_tcp 00:27:31.750 rmmod nvme_fabrics 00:27:31.750 rmmod nvme_keyring 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 431203 ']' 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 431203 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 431203 ']' 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 431203 00:27:32.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (431203) - No such process 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 431203 is not found' 00:27:32.010 Process with pid 431203 is not found 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.010 00:09:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.917 00:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:33.917 00:27:33.917 real 0m10.430s 00:27:33.917 user 0m27.536s 00:27:33.917 sys 0m5.312s 00:27:33.917 00:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:33.917 00:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:33.917 ************************************ 00:27:33.917 END TEST nvmf_shutdown_tc4 00:27:33.917 ************************************ 00:27:33.917 00:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:27:33.917 00:27:33.917 real 0m41.954s 00:27:33.917 user 1m45.133s 00:27:33.917 sys 0m14.180s 00:27:33.917 00:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:33.917 00:09:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:33.917 ************************************ 00:27:33.917 END TEST nvmf_shutdown 00:27:33.917 ************************************ 00:27:34.178 00:09:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:27:34.178 00:09:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:34.178 00:09:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:34.178 00:09:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:34.178 ************************************ 00:27:34.178 START TEST nvmf_nsid 00:27:34.178 ************************************ 00:27:34.178 00:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:27:34.178 * Looking for test storage... 
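Note: the nvmf_nsid test starting here is launched through the harness's run_test wrapper with the command recorded above. As a usage sketch only, the same script can be invoked directly from the workspace checkout shown in this log; this assumes a built SPDK tree and the same privileges and NIC environment the harness provides:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
  test/nvmf/target/nsid.sh --transport=tcp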
00:27:34.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:27:34.178 00:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:34.178 00:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:27:34.178 00:09:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:34.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.178 --rc genhtml_branch_coverage=1 00:27:34.178 --rc genhtml_function_coverage=1 00:27:34.178 --rc genhtml_legend=1 00:27:34.178 --rc geninfo_all_blocks=1 00:27:34.178 --rc geninfo_unexecuted_blocks=1 00:27:34.178 00:27:34.178 ' 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:34.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.178 --rc genhtml_branch_coverage=1 00:27:34.178 --rc genhtml_function_coverage=1 00:27:34.178 --rc genhtml_legend=1 00:27:34.178 --rc geninfo_all_blocks=1 00:27:34.178 --rc geninfo_unexecuted_blocks=1 00:27:34.178 00:27:34.178 ' 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:34.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.178 --rc genhtml_branch_coverage=1 00:27:34.178 --rc genhtml_function_coverage=1 00:27:34.178 --rc genhtml_legend=1 00:27:34.178 --rc geninfo_all_blocks=1 00:27:34.178 --rc geninfo_unexecuted_blocks=1 00:27:34.178 00:27:34.178 ' 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:34.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:34.178 --rc genhtml_branch_coverage=1 00:27:34.178 --rc genhtml_function_coverage=1 00:27:34.178 --rc genhtml_legend=1 00:27:34.178 --rc geninfo_all_blocks=1 00:27:34.178 --rc geninfo_unexecuted_blocks=1 00:27:34.178 00:27:34.178 ' 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.178 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:34.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.179 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.438 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:34.438 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:34.438 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:27:34.438 00:09:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:41.010 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.010 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:27:41.010 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:41.010 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:41.010 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:41.010 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:41.010 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:41.010 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:41.011 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:41.011 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
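The trace above walks each detected PCI device, keeps only the Intel E810 vendor:device pair (0x8086 - 0x159b), and then resolves the kernel net interface bound to each device through sysfs before the test assigns addresses to it. A minimal stand-alone sketch of that discovery step, using only standard sysfs paths (the helper name below is invented for illustration and is not the loop from nvmf/common.sh):

    #!/usr/bin/env bash
    # Sketch: list E810 NICs (vendor 0x8086, device 0x159b) and the netdev each is bound to.
    list_e810_netdevs() {
        local pci vendor device netdev
        for pci in /sys/bus/pci/devices/*; do
            vendor=$(<"$pci/vendor")    # e.g. 0x8086
            device=$(<"$pci/device")    # e.g. 0x159b
            [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
            # A device bound to a network driver exposes its interface name under net/
            for netdev in "$pci"/net/*; do
                [[ -e $netdev ]] && echo "${pci##*/} -> ${netdev##*/}"
            done
        done
    }
    list_e810_netdevs   # on the rig in this log this would print e.g. "0000:86:00.0 -> cvl_0_0"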
00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:41.011 Found net devices under 0000:86:00.0: cvl_0_0 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:41.011 Found net devices under 0000:86:00.1: cvl_0_1 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:41.011 00:09:14 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:41.011 00:09:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:41.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:41.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:27:41.011 00:27:41.011 --- 10.0.0.2 ping statistics --- 00:27:41.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.011 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:41.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:41.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:27:41.011 00:27:41.011 --- 10.0.0.1 ping statistics --- 00:27:41.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.011 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.011 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=436463 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 436463 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 436463 ']' 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:41.012 [2024-12-10 00:09:15.140277] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:27:41.012 [2024-12-10 00:09:15.140330] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.012 [2024-12-10 00:09:15.222806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.012 [2024-12-10 00:09:15.267151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.012 [2024-12-10 00:09:15.267191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.012 [2024-12-10 00:09:15.267200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.012 [2024-12-10 00:09:15.267206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.012 [2024-12-10 00:09:15.267212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:41.012 [2024-12-10 00:09:15.267671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=436592 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=10f7d486-c782-49ca-b735-5f3921584b7e 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=447a2f7e-9a10-486c-876b-f92ed9f38511 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=6e1addb1-72f4-4614-8489-02b56f6ba8e4 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:41.012 null0 00:27:41.012 null1 00:27:41.012 null2 00:27:41.012 [2024-12-10 00:09:15.465927] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:27:41.012 [2024-12-10 00:09:15.465972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436592 ] 00:27:41.012 [2024-12-10 00:09:15.468985] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.012 [2024-12-10 00:09:15.493184] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 436592 /var/tmp/tgt2.sock 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 436592 ']' 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:27:41.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
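Just above, the test generated one UUID per namespace (ns1uuid, ns2uuid, ns3uuid); the checks later in this run compare each UUID, with dashes stripped and upper-cased, against the NGUID that nvme id-ns reports in JSON for the corresponding block device. A minimal sketch of that comparison, assuming nvme-cli and jq are installed (the function name is invented here; it is not the uuid2nguid or nvme_get_nguid helpers from the test scripts):

    # Sketch: verify that a namespace's reported NGUID matches the UUID it was created with.
    uuid_matches_nguid() {
        local uuid=$1 dev=$2 want got
        want=$(tr -d '-' <<< "$uuid" | tr '[:lower:]' '[:upper:]')
        got=$(nvme id-ns "$dev" -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
        [[ $want == "$got" ]]
    }
    # Example with the first UUID from this run:
    # uuid_matches_nguid 10f7d486-c782-49ca-b735-5f3921584b7e /dev/nvme0n1 && echo nguid-ok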
00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:41.012 [2024-12-10 00:09:15.539907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.012 [2024-12-10 00:09:15.582128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:27:41.012 00:09:15 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:27:41.272 [2024-12-10 00:09:16.106128] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.272 [2024-12-10 00:09:16.122234] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:27:41.272 nvme0n1 nvme0n2 00:27:41.272 nvme1n1 00:27:41.272 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:27:41.272 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:27:41.272 00:09:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:42.648 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:27:42.648 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:27:42.648 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:27:42.648 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:27:42.648 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:27:42.648 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:27:42.648 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:27:42.648 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:27:42.648 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:42.648 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:27:42.648 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:27:42.648 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:27:42.648 00:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:27:43.587 00:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 10f7d486-c782-49ca-b735-5f3921584b7e 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=10f7d486c78249cab7355f3921584b7e 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 10F7D486C78249CAB7355F3921584B7E 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 10F7D486C78249CAB7355F3921584B7E == \1\0\F\7\D\4\8\6\C\7\8\2\4\9\C\A\B\7\3\5\5\F\3\9\2\1\5\8\4\B\7\E ]] 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 447a2f7e-9a10-486c-876b-f92ed9f38511 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=447a2f7e9a10486c876bf92ed9f38511 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 447A2F7E9A10486C876BF92ED9F38511 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 447A2F7E9A10486C876BF92ED9F38511 == \4\4\7\A\2\F\7\E\9\A\1\0\4\8\6\C\8\7\6\B\F\9\2\E\D\9\F\3\8\5\1\1 ]] 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:27:43.587 00:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 6e1addb1-72f4-4614-8489-02b56f6ba8e4 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6e1addb172f44614848902b56f6ba8e4 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6E1ADDB172F44614848902B56F6BA8E4 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 6E1ADDB172F44614848902B56F6BA8E4 == \6\E\1\A\D\D\B\1\7\2\F\4\4\6\1\4\8\4\8\9\0\2\B\5\6\F\6\B\A\8\E\4 ]] 00:27:43.587 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:27:43.847 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:27:43.847 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:27:43.847 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 436592 00:27:43.847 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 436592 ']' 00:27:43.847 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 436592 00:27:43.847 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:27:43.847 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:43.847 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 436592 00:27:43.847 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:43.847 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:43.847 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 436592' 00:27:43.847 killing process with pid 436592 00:27:43.847 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 436592 00:27:43.847 00:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 436592 00:27:44.106 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:27:44.106 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:44.106 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:27:44.106 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:44.106 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:27:44.106 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:44.106 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:44.364 rmmod nvme_tcp 00:27:44.364 rmmod nvme_fabrics 00:27:44.364 rmmod nvme_keyring 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 436463 ']' 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 436463 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 436463 ']' 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 436463 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 436463 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 436463' 00:27:44.364 killing process with pid 436463 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 436463 00:27:44.364 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 436463 00:27:44.624 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:44.624 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:44.624 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:44.624 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:27:44.624 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:27:44.624 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:44.624 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:27:44.624 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:44.624 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:44.624 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.624 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.624 00:09:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.531 00:09:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:46.531 00:27:46.531 real 0m12.487s 00:27:46.531 user 0m9.769s 00:27:46.531 
sys 0m5.477s 00:27:46.531 00:09:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:46.531 00:09:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:27:46.531 ************************************ 00:27:46.531 END TEST nvmf_nsid 00:27:46.531 ************************************ 00:27:46.531 00:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:46.531 00:27:46.531 real 12m6.689s 00:27:46.531 user 26m4.830s 00:27:46.531 sys 3m36.443s 00:27:46.531 00:09:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:46.531 00:09:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:46.531 ************************************ 00:27:46.531 END TEST nvmf_target_extra 00:27:46.531 ************************************ 00:27:46.531 00:09:21 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:46.531 00:09:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:46.531 00:09:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:46.531 00:09:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:46.791 ************************************ 00:27:46.791 START TEST nvmf_host 00:27:46.791 ************************************ 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:46.791 * Looking for test storage... 00:27:46.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:46.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.791 --rc genhtml_branch_coverage=1 00:27:46.791 --rc genhtml_function_coverage=1 00:27:46.791 --rc genhtml_legend=1 00:27:46.791 --rc geninfo_all_blocks=1 00:27:46.791 --rc geninfo_unexecuted_blocks=1 00:27:46.791 00:27:46.791 ' 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:46.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.791 --rc genhtml_branch_coverage=1 00:27:46.791 --rc genhtml_function_coverage=1 00:27:46.791 --rc genhtml_legend=1 00:27:46.791 --rc geninfo_all_blocks=1 00:27:46.791 --rc geninfo_unexecuted_blocks=1 00:27:46.791 00:27:46.791 ' 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:46.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.791 --rc genhtml_branch_coverage=1 00:27:46.791 --rc genhtml_function_coverage=1 00:27:46.791 --rc genhtml_legend=1 00:27:46.791 --rc geninfo_all_blocks=1 00:27:46.791 --rc geninfo_unexecuted_blocks=1 00:27:46.791 00:27:46.791 ' 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:46.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.791 --rc genhtml_branch_coverage=1 00:27:46.791 --rc genhtml_function_coverage=1 00:27:46.791 --rc genhtml_legend=1 00:27:46.791 --rc geninfo_all_blocks=1 00:27:46.791 --rc geninfo_unexecuted_blocks=1 00:27:46.791 00:27:46.791 ' 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.791 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:46.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:46.792 00:09:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.053 ************************************ 00:27:47.053 START TEST nvmf_multicontroller 00:27:47.053 ************************************ 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:47.053 * Looking for test storage... 
00:27:47.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:47.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.053 --rc genhtml_branch_coverage=1 00:27:47.053 --rc genhtml_function_coverage=1 00:27:47.053 --rc genhtml_legend=1 00:27:47.053 --rc geninfo_all_blocks=1 00:27:47.053 --rc geninfo_unexecuted_blocks=1 00:27:47.053 00:27:47.053 ' 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:47.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.053 --rc genhtml_branch_coverage=1 00:27:47.053 --rc genhtml_function_coverage=1 00:27:47.053 --rc genhtml_legend=1 00:27:47.053 --rc geninfo_all_blocks=1 00:27:47.053 --rc geninfo_unexecuted_blocks=1 00:27:47.053 00:27:47.053 ' 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:47.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.053 --rc genhtml_branch_coverage=1 00:27:47.053 --rc genhtml_function_coverage=1 00:27:47.053 --rc genhtml_legend=1 00:27:47.053 --rc geninfo_all_blocks=1 00:27:47.053 --rc geninfo_unexecuted_blocks=1 00:27:47.053 00:27:47.053 ' 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:47.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:47.053 --rc genhtml_branch_coverage=1 00:27:47.053 --rc genhtml_function_coverage=1 00:27:47.053 --rc genhtml_legend=1 00:27:47.053 --rc geninfo_all_blocks=1 00:27:47.053 --rc geninfo_unexecuted_blocks=1 00:27:47.053 00:27:47.053 ' 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:47.053 00:09:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.053 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:47.054 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:47.054 00:09:21 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:27:47.054 00:09:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:27:53.629 
00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:53.629 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:53.629 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.629 00:09:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:53.629 Found net devices under 0000:86:00.0: cvl_0_0 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.629 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:53.630 Found net devices under 0000:86:00.1: cvl_0_1 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
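At this point the PCI scan has matched two Intel E810 ports (vendor 0x8086, device 0x159b, driver ice) at 0000:86:00.0 and 0000:86:00.1 and resolved their net devices to cvl_0_0 and cvl_0_1 through sysfs. A minimal sketch of that lookup, assuming the same PCI addresses; the sysfs glob and the message format are the ones traced above:

  for pci in 0000:86:00.0 0000:86:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev bound to the port
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done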
00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:53.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:27:53.630 00:27:53.630 --- 10.0.0.2 ping statistics --- 00:27:53.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.630 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:53.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:53.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:27:53.630 00:27:53.630 --- 10.0.0.1 ping statistics --- 00:27:53.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.630 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=440804 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 440804 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 440804 ']' 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.630 00:09:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.630 [2024-12-10 00:09:27.892850] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:27:53.630 [2024-12-10 00:09:27.892895] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.630 [2024-12-10 00:09:27.954066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:53.630 [2024-12-10 00:09:27.995862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.630 [2024-12-10 00:09:27.995895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.630 [2024-12-10 00:09:27.995902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.630 [2024-12-10 00:09:27.995908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.630 [2024-12-10 00:09:27.995913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.630 [2024-12-10 00:09:27.997321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.630 [2024-12-10 00:09:27.997429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.630 [2024-12-10 00:09:27.997429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.630 [2024-12-10 00:09:28.134820] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.630 Malloc0 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.630 [2024-12-10 00:09:28.199392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.630 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.631 [2024-12-10 00:09:28.211341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.631 Malloc1 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=440879 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 440879 /var/tmp/bdevperf.sock 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 440879 ']' 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:53.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
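The target-side setup traced above boils down to one TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, and two subsystems (cnode1 and cnode2) that each export their bdev on listeners 4420 and 4421, after which bdevperf is started idle (-z) on its own RPC socket. A condensed, hedged replay of those steps, where rpc.py stands for scripts/rpc.py from the SPDK checkout talking to the target's default RPC socket (the wrapper form is an assumption; the commands and arguments are copied from the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # Malloc1/cnode2 are created the same way, then the initiator-side bdevperf is
  # started in -z mode so controllers can be attached to it over JSON-RPC:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f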
00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.631 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.891 NVMe0n1 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.891 1 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.891 request: 00:27:53.891 { 00:27:53.891 "name": "NVMe0", 00:27:53.891 "trtype": "tcp", 00:27:53.891 "traddr": "10.0.0.2", 00:27:53.891 "adrfam": "ipv4", 00:27:53.891 "trsvcid": "4420", 00:27:53.891 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:27:53.891 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:53.891 "hostaddr": "10.0.0.1", 00:27:53.891 "prchk_reftag": false, 00:27:53.891 "prchk_guard": false, 00:27:53.891 "hdgst": false, 00:27:53.891 "ddgst": false, 00:27:53.891 "allow_unrecognized_csi": false, 00:27:53.891 "method": "bdev_nvme_attach_controller", 00:27:53.891 "req_id": 1 00:27:53.891 } 00:27:53.891 Got JSON-RPC error response 00:27:53.891 response: 00:27:53.891 { 00:27:53.891 "code": -114, 00:27:53.891 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:53.891 } 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.891 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.891 request: 00:27:53.891 { 00:27:53.891 "name": "NVMe0", 00:27:53.891 "trtype": "tcp", 00:27:53.891 "traddr": "10.0.0.2", 00:27:53.891 "adrfam": "ipv4", 00:27:53.891 "trsvcid": "4420", 00:27:53.891 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:53.891 "hostaddr": "10.0.0.1", 00:27:53.891 "prchk_reftag": false, 00:27:53.891 "prchk_guard": false, 00:27:53.891 "hdgst": false, 00:27:53.891 "ddgst": false, 00:27:53.891 "allow_unrecognized_csi": false, 00:27:53.892 "method": "bdev_nvme_attach_controller", 00:27:53.892 "req_id": 1 00:27:53.892 } 00:27:53.892 Got JSON-RPC error response 00:27:53.892 response: 00:27:53.892 { 00:27:53.892 "code": -114, 00:27:53.892 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:53.892 } 00:27:53.892 00:09:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.892 request: 00:27:53.892 { 00:27:53.892 "name": "NVMe0", 00:27:53.892 "trtype": "tcp", 00:27:53.892 "traddr": "10.0.0.2", 00:27:53.892 "adrfam": "ipv4", 00:27:53.892 "trsvcid": "4420", 00:27:53.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.892 "hostaddr": "10.0.0.1", 00:27:53.892 "prchk_reftag": false, 00:27:53.892 "prchk_guard": false, 00:27:53.892 "hdgst": false, 00:27:53.892 "ddgst": false, 00:27:53.892 "multipath": "disable", 00:27:53.892 "allow_unrecognized_csi": false, 00:27:53.892 "method": "bdev_nvme_attach_controller", 00:27:53.892 "req_id": 1 00:27:53.892 } 00:27:53.892 Got JSON-RPC error response 00:27:53.892 response: 00:27:53.892 { 00:27:53.892 "code": -114, 00:27:53.892 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:27:53.892 } 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:53.892 00:09:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:53.892 request: 00:27:53.892 { 00:27:53.892 "name": "NVMe0", 00:27:53.892 "trtype": "tcp", 00:27:53.892 "traddr": "10.0.0.2", 00:27:53.892 "adrfam": "ipv4", 00:27:53.892 "trsvcid": "4420", 00:27:53.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.892 "hostaddr": "10.0.0.1", 00:27:53.892 "prchk_reftag": false, 00:27:53.892 "prchk_guard": false, 00:27:53.892 "hdgst": false, 00:27:53.892 "ddgst": false, 00:27:53.892 "multipath": "failover", 00:27:53.892 "allow_unrecognized_csi": false, 00:27:53.892 "method": "bdev_nvme_attach_controller", 00:27:53.892 "req_id": 1 00:27:53.892 } 00:27:53.892 Got JSON-RPC error response 00:27:53.892 response: 00:27:53.892 { 00:27:53.892 "code": -114, 00:27:53.892 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:53.892 } 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.892 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.151 NVMe0n1 00:27:54.151 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
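The attach sequence above shows the rule the test is checking: a second bdev_nvme_attach_controller for NVMe0 over the same traddr/trsvcid is rejected with JSON-RPC error -114 whether the host NQN differs, the subsystem NQN differs, or multipath is set to disable or failover, while attaching the same controller name to the second listener port (4421) succeeds and surfaces NVMe0n1. A hedged summary of those calls against the bdevperf RPC socket, with flags copied from the trace and rpc.py again assumed to stand for scripts/rpc.py:

  # Same path, different host NQN -> -114 "A controller named NVMe0 already exists ..."
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
  # Same path with multipath explicitly disabled -> -114 "... and multipath is disabled"
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
  # New path (second listener port) for the same controller name -> succeeds, NVMe0n1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1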
00:27:54.151 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:54.151 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.151 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.151 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.151 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:54.151 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.151 00:09:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.151 00:27:54.151 00:09:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.151 00:09:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:54.151 00:09:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:54.151 00:09:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.151 00:09:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:54.151 00:09:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.151 00:09:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:54.151 00:09:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:55.529 { 00:27:55.529 "results": [ 00:27:55.529 { 00:27:55.529 "job": "NVMe0n1", 00:27:55.529 "core_mask": "0x1", 00:27:55.530 "workload": "write", 00:27:55.530 "status": "finished", 00:27:55.530 "queue_depth": 128, 00:27:55.530 "io_size": 4096, 00:27:55.530 "runtime": 1.008071, 00:27:55.530 "iops": 24319.715575589416, 00:27:55.530 "mibps": 94.99888896714616, 00:27:55.530 "io_failed": 0, 00:27:55.530 "io_timeout": 0, 00:27:55.530 "avg_latency_us": 5256.837554321224, 00:27:55.530 "min_latency_us": 3105.8365217391306, 00:27:55.530 "max_latency_us": 12765.27304347826 00:27:55.530 } 00:27:55.530 ], 00:27:55.530 "core_count": 1 00:27:55.530 } 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 440879 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 440879 ']' 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 440879 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 440879 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 440879' 00:27:55.530 killing process with pid 440879 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 440879 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 440879 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt -type f 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:27:55.530 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt --- 00:27:55.530 [2024-12-10 00:09:28.310836] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:27:55.530 [2024-12-10 00:09:28.310886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440879 ] 00:27:55.530 [2024-12-10 00:09:28.386522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.530 [2024-12-10 00:09:28.428290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.530 [2024-12-10 00:09:29.046447] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name a4b011c6-924a-4282-96ab-65b63820f983 already exists 00:27:55.530 [2024-12-10 00:09:29.046474] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:a4b011c6-924a-4282-96ab-65b63820f983 alias for bdev NVMe1n1 00:27:55.530 [2024-12-10 00:09:29.046483] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:55.530 Running I/O for 1 seconds... 00:27:55.530 24261.00 IOPS, 94.77 MiB/s 00:27:55.530 Latency(us) 00:27:55.530 [2024-12-09T23:09:30.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.530 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:55.530 NVMe0n1 : 1.01 24319.72 95.00 0.00 0.00 5256.84 3105.84 12765.27 00:27:55.530 [2024-12-09T23:09:30.466Z] =================================================================================================================== 00:27:55.530 [2024-12-09T23:09:30.466Z] Total : 24319.72 95.00 0.00 0.00 5256.84 3105.84 12765.27 00:27:55.530 Received shutdown signal, test time was about 1.000000 seconds 00:27:55.530 00:27:55.530 Latency(us) 00:27:55.530 [2024-12-09T23:09:30.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.530 [2024-12-09T23:09:30.466Z] =================================================================================================================== 00:27:55.530 [2024-12-09T23:09:30.466Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:55.530 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt --- 00:27:55.530 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:55.790 rmmod nvme_tcp 00:27:55.790 rmmod nvme_fabrics 00:27:55.790 rmmod nvme_keyring 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
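The bdevperf summary above reports the same measurement twice: as raw fields in the JSON result block ("iops", "io_size", "runtime") and as the derived MiB/s column in the latency table, where MiB/s = IOPS x io_size / 2^20. A quick, illustrative sanity check of the NVMe0n1 figures from this run (not executed by the job itself; it assumes only awk is available and copies the values from the JSON block above):

# recompute the throughput reported for NVMe0n1 from the raw result fields
awk 'BEGIN { iops = 24319.715575589416; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
# prints 95.00 MiB/s, matching the "mibps" field and the MiB/s column in the table above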
00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 440804 ']' 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 440804 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 440804 ']' 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 440804 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 440804 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 440804' 00:27:55.790 killing process with pid 440804 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 440804 00:27:55.790 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 440804 00:27:56.049 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:56.049 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:56.049 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:56.049 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:27:56.049 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:27:56.049 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:56.049 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:27:56.049 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:56.049 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:56.049 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.049 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.049 00:09:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.961 00:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:57.961 00:27:57.961 real 0m11.137s 00:27:57.961 user 0m12.373s 00:27:57.961 sys 0m5.126s 00:27:57.961 00:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:57.961 00:09:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:57.961 ************************************ 00:27:57.961 END TEST nvmf_multicontroller 00:27:57.961 ************************************ 00:27:58.221 00:09:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:58.221 00:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:58.221 00:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:58.221 00:09:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.221 ************************************ 00:27:58.221 START TEST nvmf_aer 00:27:58.221 ************************************ 00:27:58.221 00:09:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:58.221 * Looking for test storage... 00:27:58.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:27:58.221 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:58.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.222 --rc genhtml_branch_coverage=1 00:27:58.222 --rc genhtml_function_coverage=1 00:27:58.222 --rc genhtml_legend=1 00:27:58.222 --rc geninfo_all_blocks=1 00:27:58.222 --rc geninfo_unexecuted_blocks=1 00:27:58.222 00:27:58.222 ' 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:58.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.222 --rc genhtml_branch_coverage=1 00:27:58.222 --rc genhtml_function_coverage=1 00:27:58.222 --rc genhtml_legend=1 00:27:58.222 --rc geninfo_all_blocks=1 00:27:58.222 --rc geninfo_unexecuted_blocks=1 00:27:58.222 00:27:58.222 ' 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:58.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.222 --rc genhtml_branch_coverage=1 00:27:58.222 --rc genhtml_function_coverage=1 00:27:58.222 --rc genhtml_legend=1 00:27:58.222 --rc geninfo_all_blocks=1 00:27:58.222 --rc geninfo_unexecuted_blocks=1 00:27:58.222 00:27:58.222 ' 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:58.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.222 --rc genhtml_branch_coverage=1 00:27:58.222 --rc genhtml_function_coverage=1 00:27:58.222 --rc genhtml_legend=1 00:27:58.222 --rc geninfo_all_blocks=1 00:27:58.222 --rc geninfo_unexecuted_blocks=1 00:27:58.222 00:27:58.222 ' 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:58.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:27:58.222 00:09:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:04.793 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:04.793 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:04.793 Found net devices under 0000:86:00.0: cvl_0_0 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:04.793 00:09:38 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:04.793 Found net devices under 0000:86:00.1: cvl_0_1 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:04.793 00:09:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:04.794 
00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:04.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:04.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:28:04.794 00:28:04.794 --- 10.0.0.2 ping statistics --- 00:28:04.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.794 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:04.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:04.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:28:04.794 00:28:04.794 --- 10.0.0.1 ping statistics --- 00:28:04.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.794 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=444822 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 444822 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 444822 ']' 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.794 [2024-12-10 00:09:39.217974] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:28:04.794 [2024-12-10 00:09:39.218022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.794 [2024-12-10 00:09:39.297064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:04.794 [2024-12-10 00:09:39.338367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.794 [2024-12-10 00:09:39.338405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:04.794 [2024-12-10 00:09:39.338412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:04.794 [2024-12-10 00:09:39.338418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:04.794 [2024-12-10 00:09:39.338423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:04.794 [2024-12-10 00:09:39.339842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.794 [2024-12-10 00:09:39.339949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.794 [2024-12-10 00:09:39.340057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.794 [2024-12-10 00:09:39.340057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.794 [2024-12-10 00:09:39.486319] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.794 Malloc0 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.794 [2024-12-10 00:09:39.547108] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:04.794 [ 00:28:04.794 { 00:28:04.794 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:04.794 "subtype": "Discovery", 00:28:04.794 "listen_addresses": [], 00:28:04.794 "allow_any_host": true, 00:28:04.794 "hosts": [] 00:28:04.794 }, 00:28:04.794 { 00:28:04.794 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.794 "subtype": "NVMe", 00:28:04.794 "listen_addresses": [ 00:28:04.794 { 00:28:04.794 "trtype": "TCP", 00:28:04.794 "adrfam": "IPv4", 00:28:04.794 "traddr": "10.0.0.2", 00:28:04.794 "trsvcid": "4420" 00:28:04.794 } 00:28:04.794 ], 00:28:04.794 "allow_any_host": true, 00:28:04.794 "hosts": [], 00:28:04.794 "serial_number": "SPDK00000000000001", 00:28:04.794 "model_number": "SPDK bdev Controller", 00:28:04.794 "max_namespaces": 2, 00:28:04.794 "min_cntlid": 1, 00:28:04.794 "max_cntlid": 65519, 00:28:04.794 "namespaces": [ 00:28:04.794 { 00:28:04.794 "nsid": 1, 00:28:04.794 "bdev_name": "Malloc0", 00:28:04.794 "name": "Malloc0", 00:28:04.794 "nguid": "31D58957AA7644DA8147B2E8D723C22E", 00:28:04.794 "uuid": "31d58957-aa76-44da-8147-b2e8d723c22e" 00:28:04.794 } 00:28:04.794 ] 00:28:04.794 } 00:28:04.794 ] 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=444846 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:28:04.794 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.054 Malloc1 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.054 Asynchronous Event Request test 00:28:05.054 Attaching to 10.0.0.2 00:28:05.054 Attached to 10.0.0.2 00:28:05.054 Registering asynchronous event callbacks... 00:28:05.054 Starting namespace attribute notice tests for all controllers... 00:28:05.054 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:05.054 aer_cb - Changed Namespace 00:28:05.054 Cleaning up... 
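The "Changed Namespace" notice above is the expected result of the nvmf_subsystem_add_ns call a few lines earlier: attaching Malloc1 as namespace 2 of nqn.2016-06.io.spdk:cnode1 makes the target raise a Namespace Attribute Changed AEN, which the aer test binary logs before touching /tmp/aer_touch_file; the subsystem listing printed next shows both namespaces. For reference, the same sequence could be reproduced by hand against a running target with SPDK's scripts/rpc.py, using the RPC methods that appear in this trace (sketch only; the relative script path and the default RPC socket are assumptions, while the bdev and subsystem names are the ones used in this run):

# create a second malloc bdev and expose it as nsid 2 of the test subsystem
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
# the new namespace now appears under cnode1, as in the listing below
./scripts/rpc.py nvmf_get_subsystems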
00:28:05.054 [ 00:28:05.054 { 00:28:05.054 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:05.054 "subtype": "Discovery", 00:28:05.054 "listen_addresses": [], 00:28:05.054 "allow_any_host": true, 00:28:05.054 "hosts": [] 00:28:05.054 }, 00:28:05.054 { 00:28:05.054 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.054 "subtype": "NVMe", 00:28:05.054 "listen_addresses": [ 00:28:05.054 { 00:28:05.054 "trtype": "TCP", 00:28:05.054 "adrfam": "IPv4", 00:28:05.054 "traddr": "10.0.0.2", 00:28:05.054 "trsvcid": "4420" 00:28:05.054 } 00:28:05.054 ], 00:28:05.054 "allow_any_host": true, 00:28:05.054 "hosts": [], 00:28:05.054 "serial_number": "SPDK00000000000001", 00:28:05.054 "model_number": "SPDK bdev Controller", 00:28:05.054 "max_namespaces": 2, 00:28:05.054 "min_cntlid": 1, 00:28:05.054 "max_cntlid": 65519, 00:28:05.054 "namespaces": [ 00:28:05.054 { 00:28:05.054 "nsid": 1, 00:28:05.054 "bdev_name": "Malloc0", 00:28:05.054 "name": "Malloc0", 00:28:05.054 "nguid": "31D58957AA7644DA8147B2E8D723C22E", 00:28:05.054 "uuid": "31d58957-aa76-44da-8147-b2e8d723c22e" 00:28:05.054 }, 00:28:05.054 { 00:28:05.054 "nsid": 2, 00:28:05.054 "bdev_name": "Malloc1", 00:28:05.054 "name": "Malloc1", 00:28:05.054 "nguid": "BA210F9A41D547C89C33E39A5F884D0C", 00:28:05.054 "uuid": "ba210f9a-41d5-47c8-9c33-e39a5f884d0c" 00:28:05.054 } 00:28:05.054 ] 00:28:05.054 } 00:28:05.054 ] 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 444846 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:05.054 rmmod 
nvme_tcp 00:28:05.054 rmmod nvme_fabrics 00:28:05.054 rmmod nvme_keyring 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 444822 ']' 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 444822 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 444822 ']' 00:28:05.054 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 444822 00:28:05.314 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:28:05.314 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:05.314 00:09:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 444822 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 444822' 00:28:05.314 killing process with pid 444822 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 444822 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 444822 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.314 00:09:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:07.851 00:28:07.851 real 0m9.346s 00:28:07.851 user 0m5.275s 00:28:07.851 sys 0m4.820s 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:07.851 ************************************ 00:28:07.851 END TEST nvmf_aer 00:28:07.851 ************************************ 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.851 ************************************ 00:28:07.851 START TEST nvmf_async_init 00:28:07.851 ************************************ 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:07.851 * Looking for test storage... 00:28:07.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:07.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.851 --rc genhtml_branch_coverage=1 00:28:07.851 --rc genhtml_function_coverage=1 00:28:07.851 --rc genhtml_legend=1 00:28:07.851 --rc geninfo_all_blocks=1 00:28:07.851 --rc geninfo_unexecuted_blocks=1 00:28:07.851 00:28:07.851 ' 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:07.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.851 --rc genhtml_branch_coverage=1 00:28:07.851 --rc genhtml_function_coverage=1 00:28:07.851 --rc genhtml_legend=1 00:28:07.851 --rc geninfo_all_blocks=1 00:28:07.851 --rc geninfo_unexecuted_blocks=1 00:28:07.851 00:28:07.851 ' 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:07.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.851 --rc genhtml_branch_coverage=1 00:28:07.851 --rc genhtml_function_coverage=1 00:28:07.851 --rc genhtml_legend=1 00:28:07.851 --rc geninfo_all_blocks=1 00:28:07.851 --rc geninfo_unexecuted_blocks=1 00:28:07.851 00:28:07.851 ' 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:07.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:07.851 --rc genhtml_branch_coverage=1 00:28:07.851 --rc genhtml_function_coverage=1 00:28:07.851 --rc genhtml_legend=1 00:28:07.851 --rc geninfo_all_blocks=1 00:28:07.851 --rc geninfo_unexecuted_blocks=1 00:28:07.851 00:28:07.851 ' 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.851 00:09:42 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.851 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:07.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:07.852 
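The "lt 1.15 2" gate replayed near the top of this test decides whether the installed lcov is old enough to need the legacy "--rc lcov_branch_coverage=1" style flags; scripts/common.sh does this with a pure-bash, component-wise version compare. A minimal sketch of that idiom follows (an illustrative reimplementation of the pattern visible in the trace, not the exact SPDK helper):

# Pure-bash "is version A strictly older than version B?" check, modeled on scripts/common.sh.
ver_lt() {
    local -a ver1 ver2
    local v
    IFS='.-' read -ra ver1 <<< "$1"
    IFS='.-' read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        [[ ${ver1[v]:-0} =~ ^[0-9]+$ ]] && ver1[v]=${ver1[v]:-0} || ver1[v]=0
        [[ ${ver2[v]:-0} =~ ^[0-9]+$ ]] && ver2[v]=${ver2[v]:-0} || ver2[v]=0
        (( ver1[v] > ver2[v] )) && return 1
        (( ver1[v] < ver2[v] )) && return 0
    done
    return 1    # equal: not strictly less-than
}

# lcov older than 2.x still takes the lcov_-prefixed coverage switches (requires lcov in PATH):
lcov_ver=$(lcov --version | awk '{print $NF}')    # e.g. "1.15"
if ver_lt "$lcov_ver" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi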
00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=60c1454d662842a7980f03f292858e64 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:28:07.852 00:09:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:14.425 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:14.425 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:14.425 Found net devices under 0000:86:00.0: cvl_0_0 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:14.425 Found net devices under 0000:86:00.1: cvl_0_1 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:14.425 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.426 00:09:48 
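gather_supported_nvmf_pci_devs above walks a table of Intel/Mellanox PCI IDs, keeps the two E810 ports it finds (0x8086:0x159b at 0000:86:00.0 and 0000:86:00.1), and resolves each one to its kernel net device through sysfs, ending up with cvl_0_0 and cvl_0_1. A condensed sketch of that sysfs lookup (device IDs taken from this trace; any matching port with a bound netdev is reported the same way):

# Map E810 ports (vendor 0x8086, device 0x159b) to their net device names via sysfs.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net_dev in "$pci"/net/*; do
        [[ -e $net_dev ]] || continue    # no netdev bound (e.g. port claimed by vfio-pci)
        echo "Found net devices under ${pci##*/}: ${net_dev##*/}"
    done
done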
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:14.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:28:14.426 00:28:14.426 --- 10.0.0.2 ping statistics --- 00:28:14.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.426 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:14.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:28:14.426 00:28:14.426 --- 10.0.0.1 ping statistics --- 00:28:14.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.426 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=448450 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 448450 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 448450 ']' 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.426 [2024-12-10 00:09:48.544744] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
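nvmf_tcp_init above turns the two E810 ports into a self-contained target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), an iptables rule admits TCP/4420, and a ping in each direction proves the path before nvmf_tgt is launched inside the namespace. Condensed into a standalone sketch (interface names, addresses and nvmf_tgt arguments are the ones used by this job; the binary path is relative to an SPDK build tree):

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns -> root ns
modprobe nvme-tcp

# The target application then runs entirely inside the namespace:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &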
00:28:14.426 [2024-12-10 00:09:48.544786] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.426 [2024-12-10 00:09:48.625824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.426 [2024-12-10 00:09:48.665718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.426 [2024-12-10 00:09:48.665757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.426 [2024-12-10 00:09:48.665764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.426 [2024-12-10 00:09:48.665773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.426 [2024-12-10 00:09:48.665778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.426 [2024-12-10 00:09:48.666332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.426 [2024-12-10 00:09:48.801920] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.426 null0 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 60c1454d662842a7980f03f292858e64 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.426 [2024-12-10 00:09:48.854188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.426 00:09:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.426 nvme0n1 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.427 [ 00:28:14.427 { 00:28:14.427 "name": "nvme0n1", 00:28:14.427 "aliases": [ 00:28:14.427 "60c1454d-6628-42a7-980f-03f292858e64" 00:28:14.427 ], 00:28:14.427 "product_name": "NVMe disk", 00:28:14.427 "block_size": 512, 00:28:14.427 "num_blocks": 2097152, 00:28:14.427 "uuid": "60c1454d-6628-42a7-980f-03f292858e64", 00:28:14.427 "numa_id": 1, 00:28:14.427 "assigned_rate_limits": { 00:28:14.427 "rw_ios_per_sec": 0, 00:28:14.427 "rw_mbytes_per_sec": 0, 00:28:14.427 "r_mbytes_per_sec": 0, 00:28:14.427 "w_mbytes_per_sec": 0 00:28:14.427 }, 00:28:14.427 "claimed": false, 00:28:14.427 "zoned": false, 00:28:14.427 "supported_io_types": { 00:28:14.427 "read": true, 00:28:14.427 "write": true, 00:28:14.427 "unmap": false, 00:28:14.427 "flush": true, 00:28:14.427 "reset": true, 00:28:14.427 "nvme_admin": true, 00:28:14.427 "nvme_io": true, 00:28:14.427 "nvme_io_md": false, 00:28:14.427 "write_zeroes": true, 00:28:14.427 "zcopy": false, 00:28:14.427 "get_zone_info": false, 00:28:14.427 "zone_management": false, 00:28:14.427 "zone_append": false, 00:28:14.427 "compare": true, 00:28:14.427 "compare_and_write": true, 00:28:14.427 "abort": true, 00:28:14.427 "seek_hole": false, 00:28:14.427 "seek_data": false, 00:28:14.427 "copy": true, 00:28:14.427 "nvme_iov_md": false 00:28:14.427 }, 00:28:14.427 
"memory_domains": [ 00:28:14.427 { 00:28:14.427 "dma_device_id": "system", 00:28:14.427 "dma_device_type": 1 00:28:14.427 } 00:28:14.427 ], 00:28:14.427 "driver_specific": { 00:28:14.427 "nvme": [ 00:28:14.427 { 00:28:14.427 "trid": { 00:28:14.427 "trtype": "TCP", 00:28:14.427 "adrfam": "IPv4", 00:28:14.427 "traddr": "10.0.0.2", 00:28:14.427 "trsvcid": "4420", 00:28:14.427 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:14.427 }, 00:28:14.427 "ctrlr_data": { 00:28:14.427 "cntlid": 1, 00:28:14.427 "vendor_id": "0x8086", 00:28:14.427 "model_number": "SPDK bdev Controller", 00:28:14.427 "serial_number": "00000000000000000000", 00:28:14.427 "firmware_revision": "25.01", 00:28:14.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.427 "oacs": { 00:28:14.427 "security": 0, 00:28:14.427 "format": 0, 00:28:14.427 "firmware": 0, 00:28:14.427 "ns_manage": 0 00:28:14.427 }, 00:28:14.427 "multi_ctrlr": true, 00:28:14.427 "ana_reporting": false 00:28:14.427 }, 00:28:14.427 "vs": { 00:28:14.427 "nvme_version": "1.3" 00:28:14.427 }, 00:28:14.427 "ns_data": { 00:28:14.427 "id": 1, 00:28:14.427 "can_share": true 00:28:14.427 } 00:28:14.427 } 00:28:14.427 ], 00:28:14.427 "mp_policy": "active_passive" 00:28:14.427 } 00:28:14.427 } 00:28:14.427 ] 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.427 [2024-12-10 00:09:49.118724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:14.427 [2024-12-10 00:09:49.118779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570c00 (9): Bad file descriptor 00:28:14.427 [2024-12-10 00:09:49.250238] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.427 [ 00:28:14.427 { 00:28:14.427 "name": "nvme0n1", 00:28:14.427 "aliases": [ 00:28:14.427 "60c1454d-6628-42a7-980f-03f292858e64" 00:28:14.427 ], 00:28:14.427 "product_name": "NVMe disk", 00:28:14.427 "block_size": 512, 00:28:14.427 "num_blocks": 2097152, 00:28:14.427 "uuid": "60c1454d-6628-42a7-980f-03f292858e64", 00:28:14.427 "numa_id": 1, 00:28:14.427 "assigned_rate_limits": { 00:28:14.427 "rw_ios_per_sec": 0, 00:28:14.427 "rw_mbytes_per_sec": 0, 00:28:14.427 "r_mbytes_per_sec": 0, 00:28:14.427 "w_mbytes_per_sec": 0 00:28:14.427 }, 00:28:14.427 "claimed": false, 00:28:14.427 "zoned": false, 00:28:14.427 "supported_io_types": { 00:28:14.427 "read": true, 00:28:14.427 "write": true, 00:28:14.427 "unmap": false, 00:28:14.427 "flush": true, 00:28:14.427 "reset": true, 00:28:14.427 "nvme_admin": true, 00:28:14.427 "nvme_io": true, 00:28:14.427 "nvme_io_md": false, 00:28:14.427 "write_zeroes": true, 00:28:14.427 "zcopy": false, 00:28:14.427 "get_zone_info": false, 00:28:14.427 "zone_management": false, 00:28:14.427 "zone_append": false, 00:28:14.427 "compare": true, 00:28:14.427 "compare_and_write": true, 00:28:14.427 "abort": true, 00:28:14.427 "seek_hole": false, 00:28:14.427 "seek_data": false, 00:28:14.427 "copy": true, 00:28:14.427 "nvme_iov_md": false 00:28:14.427 }, 00:28:14.427 "memory_domains": [ 00:28:14.427 { 00:28:14.427 "dma_device_id": "system", 00:28:14.427 "dma_device_type": 1 00:28:14.427 } 00:28:14.427 ], 00:28:14.427 "driver_specific": { 00:28:14.427 "nvme": [ 00:28:14.427 { 00:28:14.427 "trid": { 00:28:14.427 "trtype": "TCP", 00:28:14.427 "adrfam": "IPv4", 00:28:14.427 "traddr": "10.0.0.2", 00:28:14.427 "trsvcid": "4420", 00:28:14.427 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:14.427 }, 00:28:14.427 "ctrlr_data": { 00:28:14.427 "cntlid": 2, 00:28:14.427 "vendor_id": "0x8086", 00:28:14.427 "model_number": "SPDK bdev Controller", 00:28:14.427 "serial_number": "00000000000000000000", 00:28:14.427 "firmware_revision": "25.01", 00:28:14.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.427 "oacs": { 00:28:14.427 "security": 0, 00:28:14.427 "format": 0, 00:28:14.427 "firmware": 0, 00:28:14.427 "ns_manage": 0 00:28:14.427 }, 00:28:14.427 "multi_ctrlr": true, 00:28:14.427 "ana_reporting": false 00:28:14.427 }, 00:28:14.427 "vs": { 00:28:14.427 "nvme_version": "1.3" 00:28:14.427 }, 00:28:14.427 "ns_data": { 00:28:14.427 "id": 1, 00:28:14.427 "can_share": true 00:28:14.427 } 00:28:14.427 } 00:28:14.427 ], 00:28:14.427 "mp_policy": "active_passive" 00:28:14.427 } 00:28:14.427 } 00:28:14.427 ] 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.5VjFiB4Cp5 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.5VjFiB4Cp5 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.5VjFiB4Cp5 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.427 [2024-12-10 00:09:49.323351] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:14.427 [2024-12-10 00:09:49.323449] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.427 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.427 [2024-12-10 00:09:49.343419] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:14.687 nvme0n1 00:28:14.687 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.687 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:28:14.687 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.687 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.687 [ 00:28:14.687 { 00:28:14.687 "name": "nvme0n1", 00:28:14.687 "aliases": [ 00:28:14.687 "60c1454d-6628-42a7-980f-03f292858e64" 00:28:14.687 ], 00:28:14.687 "product_name": "NVMe disk", 00:28:14.687 "block_size": 512, 00:28:14.687 "num_blocks": 2097152, 00:28:14.687 "uuid": "60c1454d-6628-42a7-980f-03f292858e64", 00:28:14.687 "numa_id": 1, 00:28:14.687 "assigned_rate_limits": { 00:28:14.687 "rw_ios_per_sec": 0, 00:28:14.687 "rw_mbytes_per_sec": 0, 00:28:14.687 "r_mbytes_per_sec": 0, 00:28:14.687 "w_mbytes_per_sec": 0 00:28:14.687 }, 00:28:14.687 "claimed": false, 00:28:14.687 "zoned": false, 00:28:14.687 "supported_io_types": { 00:28:14.687 "read": true, 00:28:14.687 "write": true, 00:28:14.687 "unmap": false, 00:28:14.687 "flush": true, 00:28:14.687 "reset": true, 00:28:14.687 "nvme_admin": true, 00:28:14.687 "nvme_io": true, 00:28:14.687 "nvme_io_md": false, 00:28:14.687 "write_zeroes": true, 00:28:14.687 "zcopy": false, 00:28:14.687 "get_zone_info": false, 00:28:14.687 "zone_management": false, 00:28:14.687 "zone_append": false, 00:28:14.687 "compare": true, 00:28:14.687 "compare_and_write": true, 00:28:14.687 "abort": true, 00:28:14.687 "seek_hole": false, 00:28:14.687 "seek_data": false, 00:28:14.687 "copy": true, 00:28:14.687 "nvme_iov_md": false 00:28:14.687 }, 00:28:14.687 "memory_domains": [ 00:28:14.687 { 00:28:14.687 "dma_device_id": "system", 00:28:14.687 "dma_device_type": 1 00:28:14.687 } 00:28:14.687 ], 00:28:14.687 "driver_specific": { 00:28:14.687 "nvme": [ 00:28:14.688 { 00:28:14.688 "trid": { 00:28:14.688 "trtype": "TCP", 00:28:14.688 "adrfam": "IPv4", 00:28:14.688 "traddr": "10.0.0.2", 00:28:14.688 "trsvcid": "4421", 00:28:14.688 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:14.688 }, 00:28:14.688 "ctrlr_data": { 00:28:14.688 "cntlid": 3, 00:28:14.688 "vendor_id": "0x8086", 00:28:14.688 "model_number": "SPDK bdev Controller", 00:28:14.688 "serial_number": "00000000000000000000", 00:28:14.688 "firmware_revision": "25.01", 00:28:14.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.688 "oacs": { 00:28:14.688 "security": 0, 00:28:14.688 "format": 0, 00:28:14.688 "firmware": 0, 00:28:14.688 "ns_manage": 0 00:28:14.688 }, 00:28:14.688 "multi_ctrlr": true, 00:28:14.688 "ana_reporting": false 00:28:14.688 }, 00:28:14.688 "vs": { 00:28:14.688 "nvme_version": "1.3" 00:28:14.688 }, 00:28:14.688 "ns_data": { 00:28:14.688 "id": 1, 00:28:14.688 "can_share": true 00:28:14.688 } 00:28:14.688 } 00:28:14.688 ], 00:28:14.688 "mp_policy": "active_passive" 00:28:14.688 } 00:28:14.688 } 00:28:14.688 ] 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.5VjFiB4Cp5 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
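The last leg of the test repeats the attach over a TLS-protected listener: a PSK in NVMe TLS interchange format goes into a temp file with 0600 permissions, is registered as keyring entry key0, the subsystem is switched from allow-any-host to an explicit host entry bound to that key, a second listener on port 4421 is added with --secure-channel, and the initiator attaches with the matching hostnqn and --psk; the bdev comes back identical but now via trsvcid 4421 with cntlid 3. Condensed from the trace (the key below is the sample key this job uses; rpc.py helper as in the previous sketch):

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

KEY_FILE=$(mktemp)     # /tmp/tmp.5VjFiB4Cp5 in this run
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_FILE"
chmod 0600 "$KEY_FILE"

$RPC keyring_file_add_key key0 "$KEY_FILE"
$RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 \
    --secure-channel
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

$RPC bdev_get_bdevs -b nvme0n1
$RPC bdev_nvme_detach_controller nvme0
rm -f "$KEY_FILE"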
00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:14.688 rmmod nvme_tcp 00:28:14.688 rmmod nvme_fabrics 00:28:14.688 rmmod nvme_keyring 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 448450 ']' 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 448450 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 448450 ']' 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 448450 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 448450 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 448450' 00:28:14.688 killing process with pid 448450 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 448450 00:28:14.688 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 448450 00:28:14.948 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:14.948 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:14.948 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:14.948 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:28:14.948 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:28:14.948 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:14.948 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:28:14.948 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:14.948 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:14.948 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.948 
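nvmftestfini above then unwinds the fixture: the nvme-tcp/nvme-fabrics modules are unloaded (taking nvme_keyring with them), the nvmf_tgt process (pid 448450 in this run) is killed, the SPDK_NVMF-tagged iptables rules are filtered back out, and the namespace and leftover addresses are removed. A rough manual equivalent (the namespace removal itself runs with tracing disabled in the log, so "ip netns delete" is an assumed equivalent of _remove_spdk_ns; NVMF_PID is a placeholder for the pid captured at startup):

sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$NVMF_PID"                                        # 448450 here; placeholder variable
iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep everything except this test's rules
ip netns delete cvl_0_0_ns_spdk                         # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1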
00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.948 00:09:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.854 00:09:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:16.854 00:28:16.854 real 0m9.421s 00:28:16.854 user 0m3.098s 00:28:16.854 sys 0m4.751s 00:28:16.854 00:09:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.854 00:09:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:16.854 ************************************ 00:28:16.854 END TEST nvmf_async_init 00:28:16.854 ************************************ 00:28:17.113 00:09:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:17.113 00:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:17.113 00:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:17.113 00:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.113 ************************************ 00:28:17.113 START TEST dma 00:28:17.113 ************************************ 00:28:17.113 00:09:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:17.113 * Looking for test storage... 00:28:17.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:28:17.113 00:09:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:17.113 00:09:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:28:17.113 00:09:51 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:17.113 00:09:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:17.113 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.113 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:17.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.114 --rc genhtml_branch_coverage=1 00:28:17.114 --rc genhtml_function_coverage=1 00:28:17.114 --rc genhtml_legend=1 00:28:17.114 --rc geninfo_all_blocks=1 00:28:17.114 --rc geninfo_unexecuted_blocks=1 00:28:17.114 00:28:17.114 ' 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:17.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.114 --rc genhtml_branch_coverage=1 00:28:17.114 --rc genhtml_function_coverage=1 00:28:17.114 --rc genhtml_legend=1 00:28:17.114 --rc geninfo_all_blocks=1 00:28:17.114 --rc geninfo_unexecuted_blocks=1 00:28:17.114 00:28:17.114 ' 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:17.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.114 --rc genhtml_branch_coverage=1 00:28:17.114 --rc genhtml_function_coverage=1 00:28:17.114 --rc genhtml_legend=1 00:28:17.114 --rc geninfo_all_blocks=1 00:28:17.114 --rc geninfo_unexecuted_blocks=1 00:28:17.114 00:28:17.114 ' 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:17.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.114 --rc genhtml_branch_coverage=1 00:28:17.114 --rc genhtml_function_coverage=1 00:28:17.114 --rc genhtml_legend=1 00:28:17.114 --rc geninfo_all_blocks=1 00:28:17.114 --rc geninfo_unexecuted_blocks=1 00:28:17.114 00:28:17.114 ' 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:17.114 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:17.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:17.373 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:28:17.374 00:28:17.374 real 0m0.207s 00:28:17.374 user 0m0.126s 00:28:17.374 sys 0m0.096s 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:28:17.374 ************************************ 00:28:17.374 END TEST dma 00:28:17.374 ************************************ 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.374 ************************************ 00:28:17.374 START TEST nvmf_identify 00:28:17.374 
************************************ 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:17.374 * Looking for test storage... 00:28:17.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:17.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.374 --rc genhtml_branch_coverage=1 00:28:17.374 --rc genhtml_function_coverage=1 00:28:17.374 --rc genhtml_legend=1 00:28:17.374 --rc geninfo_all_blocks=1 00:28:17.374 --rc geninfo_unexecuted_blocks=1 00:28:17.374 00:28:17.374 ' 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:17.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.374 --rc genhtml_branch_coverage=1 00:28:17.374 --rc genhtml_function_coverage=1 00:28:17.374 --rc genhtml_legend=1 00:28:17.374 --rc geninfo_all_blocks=1 00:28:17.374 --rc geninfo_unexecuted_blocks=1 00:28:17.374 00:28:17.374 ' 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:17.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.374 --rc genhtml_branch_coverage=1 00:28:17.374 --rc genhtml_function_coverage=1 00:28:17.374 --rc genhtml_legend=1 00:28:17.374 --rc geninfo_all_blocks=1 00:28:17.374 --rc geninfo_unexecuted_blocks=1 00:28:17.374 00:28:17.374 ' 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:17.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.374 --rc genhtml_branch_coverage=1 00:28:17.374 --rc genhtml_function_coverage=1 00:28:17.374 --rc genhtml_legend=1 00:28:17.374 --rc geninfo_all_blocks=1 00:28:17.374 --rc geninfo_unexecuted_blocks=1 00:28:17.374 00:28:17.374 ' 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.374 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.633 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:17.633 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:17.633 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.633 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.633 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:17.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:28:17.634 00:09:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:24.216 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:24.216 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:24.216 Found net devices under 0000:86:00.0: cvl_0_0 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:24.216 Found net devices under 0000:86:00.1: cvl_0_1 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.216 00:09:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.216 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.216 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.216 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:24.216 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:24.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:28:24.217 00:28:24.217 --- 10.0.0.2 ping statistics --- 00:28:24.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.217 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:24.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:28:24.217 00:28:24.217 --- 10.0.0.1 ping statistics --- 00:28:24.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.217 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=452195 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 452195 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 452195 ']' 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:24.217 [2024-12-10 00:09:58.302899] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
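For reference, the nvmf_tcp_init namespace plumbing recorded in the entries above can be reproduced by hand. The following is a minimal sketch, not part of the captured output, assuming the same interface names (cvl_0_0, cvl_0_1), namespace name (cvl_0_0_ns_spdk), and 10.0.0.0/24 addressing that this run logged; it needs root and the two e810 ports already bound to the ice driver.

#!/usr/bin/env bash
# Sketch of the target/initiator split used by this test run:
#   target port  cvl_0_0 -> moved into netns cvl_0_0_ns_spdk, addressed 10.0.0.2/24
#   initiator    cvl_0_1 -> stays in the default netns,       addressed 10.0.0.1/24
set -e
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic (port 4420) in from the initiator-side interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# reachability check in both directions, as the test does before starting nvmf_tgt
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Once both pings succeed, the test launches nvmf_tgt inside cvl_0_0_ns_spdk (the startup output that follows) so that the target listens on 10.0.0.2:4420 while the host-side tools connect from 10.0.0.1.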
00:28:24.217 [2024-12-10 00:09:58.302941] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.217 [2024-12-10 00:09:58.382963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:24.217 [2024-12-10 00:09:58.425342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.217 [2024-12-10 00:09:58.425381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.217 [2024-12-10 00:09:58.425389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.217 [2024-12-10 00:09:58.425395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.217 [2024-12-10 00:09:58.425400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.217 [2024-12-10 00:09:58.426798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.217 [2024-12-10 00:09:58.426907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:24.217 [2024-12-10 00:09:58.427006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:24.217 [2024-12-10 00:09:58.427007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:24.217 [2024-12-10 00:09:58.536125] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:24.217 Malloc0 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:24.217 [2024-12-10 00:09:58.635802] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:24.217 [ 00:28:24.217 { 00:28:24.217 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:24.217 "subtype": "Discovery", 00:28:24.217 "listen_addresses": [ 00:28:24.217 { 00:28:24.217 "trtype": "TCP", 00:28:24.217 "adrfam": "IPv4", 00:28:24.217 "traddr": "10.0.0.2", 00:28:24.217 "trsvcid": "4420" 00:28:24.217 } 00:28:24.217 ], 00:28:24.217 "allow_any_host": true, 00:28:24.217 "hosts": [] 00:28:24.217 }, 00:28:24.217 { 00:28:24.217 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:24.217 "subtype": "NVMe", 00:28:24.217 "listen_addresses": [ 00:28:24.217 { 00:28:24.217 "trtype": "TCP", 00:28:24.217 "adrfam": "IPv4", 00:28:24.217 "traddr": "10.0.0.2", 00:28:24.217 "trsvcid": "4420" 00:28:24.217 } 00:28:24.217 ], 00:28:24.217 "allow_any_host": true, 00:28:24.217 "hosts": [], 00:28:24.217 "serial_number": "SPDK00000000000001", 00:28:24.217 "model_number": "SPDK bdev Controller", 00:28:24.217 "max_namespaces": 32, 00:28:24.217 "min_cntlid": 1, 00:28:24.217 "max_cntlid": 65519, 00:28:24.217 "namespaces": [ 00:28:24.217 { 00:28:24.217 "nsid": 1, 00:28:24.217 "bdev_name": "Malloc0", 00:28:24.217 "name": "Malloc0", 00:28:24.217 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:24.217 "eui64": "ABCDEF0123456789", 00:28:24.217 "uuid": "9cc4201a-6857-4a76-8065-9f04b960a07e" 00:28:24.217 } 00:28:24.217 ] 00:28:24.217 } 00:28:24.217 ] 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.217 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:24.217 [2024-12-10 00:09:58.686895] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:28:24.217 [2024-12-10 00:09:58.686934] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452329 ] 00:28:24.217 [2024-12-10 00:09:58.732307] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:28:24.217 [2024-12-10 00:09:58.732358] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:24.217 [2024-12-10 00:09:58.732363] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:24.217 [2024-12-10 00:09:58.732377] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:24.217 [2024-12-10 00:09:58.732386] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:24.217 [2024-12-10 00:09:58.732918] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:28:24.218 [2024-12-10 00:09:58.732953] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2071690 0 00:28:24.218 [2024-12-10 00:09:58.739172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:24.218 [2024-12-10 00:09:58.739187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:24.218 [2024-12-10 00:09:58.739192] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:24.218 [2024-12-10 00:09:58.739195] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:24.218 [2024-12-10 00:09:58.739230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.739236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.739240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2071690) 00:28:24.218 [2024-12-10 00:09:58.739254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:24.218 [2024-12-10 00:09:58.739272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3100, cid 0, qid 0 00:28:24.218 [2024-12-10 00:09:58.747168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.218 [2024-12-10 00:09:58.747177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.218 [2024-12-10 00:09:58.747181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3100) on tqpair=0x2071690 00:28:24.218 [2024-12-10 00:09:58.747197] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:24.218 [2024-12-10 00:09:58.747204] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:28:24.218 [2024-12-10 00:09:58.747209] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:28:24.218 [2024-12-10 00:09:58.747224] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2071690) 00:28:24.218 [2024-12-10 00:09:58.747238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.218 [2024-12-10 00:09:58.747251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3100, cid 0, qid 0 00:28:24.218 [2024-12-10 00:09:58.747408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.218 [2024-12-10 00:09:58.747413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.218 [2024-12-10 00:09:58.747417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3100) on tqpair=0x2071690 00:28:24.218 [2024-12-10 00:09:58.747427] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:28:24.218 [2024-12-10 00:09:58.747434] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:28:24.218 [2024-12-10 00:09:58.747440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747450] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2071690) 00:28:24.218 [2024-12-10 00:09:58.747456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.218 [2024-12-10 00:09:58.747466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3100, cid 0, qid 0 00:28:24.218 [2024-12-10 00:09:58.747554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.218 [2024-12-10 00:09:58.747560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.218 [2024-12-10 00:09:58.747563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3100) on tqpair=0x2071690 00:28:24.218 [2024-12-10 00:09:58.747571] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:28:24.218 [2024-12-10 00:09:58.747578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:28:24.218 [2024-12-10 00:09:58.747584] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747591] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2071690) 00:28:24.218 [2024-12-10 00:09:58.747597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.218 [2024-12-10 00:09:58.747606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3100, cid 0, qid 0 
00:28:24.218 [2024-12-10 00:09:58.747665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.218 [2024-12-10 00:09:58.747671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.218 [2024-12-10 00:09:58.747674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3100) on tqpair=0x2071690 00:28:24.218 [2024-12-10 00:09:58.747681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:24.218 [2024-12-10 00:09:58.747689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2071690) 00:28:24.218 [2024-12-10 00:09:58.747702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.218 [2024-12-10 00:09:58.747711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3100, cid 0, qid 0 00:28:24.218 [2024-12-10 00:09:58.747805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.218 [2024-12-10 00:09:58.747811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.218 [2024-12-10 00:09:58.747814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3100) on tqpair=0x2071690 00:28:24.218 [2024-12-10 00:09:58.747822] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:28:24.218 [2024-12-10 00:09:58.747826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:28:24.218 [2024-12-10 00:09:58.747833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:24.218 [2024-12-10 00:09:58.747940] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:28:24.218 [2024-12-10 00:09:58.747948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:24.218 [2024-12-10 00:09:58.747957] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.747964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2071690) 00:28:24.218 [2024-12-10 00:09:58.747969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.218 [2024-12-10 00:09:58.747979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3100, cid 0, qid 0 00:28:24.218 [2024-12-10 00:09:58.748094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.218 [2024-12-10 00:09:58.748100] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.218 [2024-12-10 00:09:58.748104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3100) on tqpair=0x2071690 00:28:24.218 [2024-12-10 00:09:58.748111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:24.218 [2024-12-10 00:09:58.748119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2071690) 00:28:24.218 [2024-12-10 00:09:58.748131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.218 [2024-12-10 00:09:58.748140] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3100, cid 0, qid 0 00:28:24.218 [2024-12-10 00:09:58.748245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.218 [2024-12-10 00:09:58.748252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.218 [2024-12-10 00:09:58.748255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3100) on tqpair=0x2071690 00:28:24.218 [2024-12-10 00:09:58.748262] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:24.218 [2024-12-10 00:09:58.748267] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:28:24.218 [2024-12-10 00:09:58.748273] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:28:24.218 [2024-12-10 00:09:58.748285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:28:24.218 [2024-12-10 00:09:58.748294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748297] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2071690) 00:28:24.218 [2024-12-10 00:09:58.748303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.218 [2024-12-10 00:09:58.748313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3100, cid 0, qid 0 00:28:24.218 [2024-12-10 00:09:58.748417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.218 [2024-12-10 00:09:58.748422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.218 [2024-12-10 00:09:58.748425] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748429] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2071690): datao=0, datal=4096, cccid=0 00:28:24.218 [2024-12-10 00:09:58.748433] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x20d3100) on tqpair(0x2071690): expected_datao=0, payload_size=4096 00:28:24.218 [2024-12-10 00:09:58.748439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748465] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748470] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.218 [2024-12-10 00:09:58.748555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.218 [2024-12-10 00:09:58.748558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3100) on tqpair=0x2071690 00:28:24.218 [2024-12-10 00:09:58.748568] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:28:24.218 [2024-12-10 00:09:58.748572] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:28:24.218 [2024-12-10 00:09:58.748576] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:28:24.218 [2024-12-10 00:09:58.748581] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:28:24.218 [2024-12-10 00:09:58.748585] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:28:24.218 [2024-12-10 00:09:58.748589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:28:24.218 [2024-12-10 00:09:58.748597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:28:24.218 [2024-12-10 00:09:58.748603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2071690) 00:28:24.218 [2024-12-10 00:09:58.748615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:24.218 [2024-12-10 00:09:58.748626] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3100, cid 0, qid 0 00:28:24.218 [2024-12-10 00:09:58.748692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.218 [2024-12-10 00:09:58.748698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.218 [2024-12-10 00:09:58.748701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3100) on tqpair=0x2071690 00:28:24.218 [2024-12-10 00:09:58.748712] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2071690) 00:28:24.218 
[2024-12-10 00:09:58.748723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.218 [2024-12-10 00:09:58.748728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.218 [2024-12-10 00:09:58.748735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2071690) 00:28:24.218 [2024-12-10 00:09:58.748740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.218 [2024-12-10 00:09:58.748745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.748748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.748751] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2071690) 00:28:24.219 [2024-12-10 00:09:58.748758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.219 [2024-12-10 00:09:58.748763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.748766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.748769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2071690) 00:28:24.219 [2024-12-10 00:09:58.748774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.219 [2024-12-10 00:09:58.748778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:24.219 [2024-12-10 00:09:58.748789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:24.219 [2024-12-10 00:09:58.748795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.748798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2071690) 00:28:24.219 [2024-12-10 00:09:58.748804] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.219 [2024-12-10 00:09:58.748815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3100, cid 0, qid 0 00:28:24.219 [2024-12-10 00:09:58.748819] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3280, cid 1, qid 0 00:28:24.219 [2024-12-10 00:09:58.748823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3400, cid 2, qid 0 00:28:24.219 [2024-12-10 00:09:58.748827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3580, cid 3, qid 0 00:28:24.219 [2024-12-10 00:09:58.748831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3700, cid 4, qid 0 00:28:24.219 [2024-12-10 00:09:58.748947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.219 [2024-12-10 00:09:58.748953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.219 [2024-12-10 00:09:58.748955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:28:24.219 [2024-12-10 00:09:58.748959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3700) on tqpair=0x2071690 00:28:24.219 [2024-12-10 00:09:58.748963] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:28:24.219 [2024-12-10 00:09:58.748968] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:28:24.219 [2024-12-10 00:09:58.748977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.748981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2071690) 00:28:24.219 [2024-12-10 00:09:58.748986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.219 [2024-12-10 00:09:58.748996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3700, cid 4, qid 0 00:28:24.219 [2024-12-10 00:09:58.749074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.219 [2024-12-10 00:09:58.749080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.219 [2024-12-10 00:09:58.749083] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.749086] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2071690): datao=0, datal=4096, cccid=4 00:28:24.219 [2024-12-10 00:09:58.749090] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d3700) on tqpair(0x2071690): expected_datao=0, payload_size=4096 00:28:24.219 [2024-12-10 00:09:58.749094] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.749099] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.749105] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.749146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.219 [2024-12-10 00:09:58.749152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.219 [2024-12-10 00:09:58.749155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.749166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3700) on tqpair=0x2071690 00:28:24.219 [2024-12-10 00:09:58.749177] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:28:24.219 [2024-12-10 00:09:58.749199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.749203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2071690) 00:28:24.219 [2024-12-10 00:09:58.749208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.219 [2024-12-10 00:09:58.749214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.749218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.749221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2071690) 00:28:24.219 [2024-12-10 00:09:58.749226] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.219 [2024-12-10 00:09:58.749239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3700, cid 4, qid 0 00:28:24.219 [2024-12-10 00:09:58.749244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3880, cid 5, qid 0 00:28:24.219 [2024-12-10 00:09:58.749360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.219 [2024-12-10 00:09:58.749366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.219 [2024-12-10 00:09:58.749369] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.749372] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2071690): datao=0, datal=1024, cccid=4 00:28:24.219 [2024-12-10 00:09:58.749376] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d3700) on tqpair(0x2071690): expected_datao=0, payload_size=1024 00:28:24.219 [2024-12-10 00:09:58.749380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.749385] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.749389] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.749394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.219 [2024-12-10 00:09:58.749398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.219 [2024-12-10 00:09:58.749401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.749405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3880) on tqpair=0x2071690 00:28:24.219 [2024-12-10 00:09:58.792170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.219 [2024-12-10 00:09:58.792181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.219 [2024-12-10 00:09:58.792184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.792188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3700) on tqpair=0x2071690 00:28:24.219 [2024-12-10 00:09:58.792198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.792202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2071690) 00:28:24.219 [2024-12-10 00:09:58.792209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.219 [2024-12-10 00:09:58.792225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3700, cid 4, qid 0 00:28:24.219 [2024-12-10 00:09:58.792315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.219 [2024-12-10 00:09:58.792324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.219 [2024-12-10 00:09:58.792327] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.792331] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2071690): datao=0, datal=3072, cccid=4 00:28:24.219 [2024-12-10 00:09:58.792335] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d3700) on tqpair(0x2071690): expected_datao=0, payload_size=3072 00:28:24.219 [2024-12-10 00:09:58.792339] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.792350] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.792354] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.792426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.219 [2024-12-10 00:09:58.792432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.219 [2024-12-10 00:09:58.792435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.792438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3700) on tqpair=0x2071690 00:28:24.219 [2024-12-10 00:09:58.792446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.792450] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2071690) 00:28:24.219 [2024-12-10 00:09:58.792455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.219 [2024-12-10 00:09:58.792469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3700, cid 4, qid 0 00:28:24.219 [2024-12-10 00:09:58.792540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.219 [2024-12-10 00:09:58.792546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.219 [2024-12-10 00:09:58.792549] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.792552] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2071690): datao=0, datal=8, cccid=4 00:28:24.219 [2024-12-10 00:09:58.792556] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d3700) on tqpair(0x2071690): expected_datao=0, payload_size=8 00:28:24.219 [2024-12-10 00:09:58.792560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.792565] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.792568] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.834353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.219 [2024-12-10 00:09:58.834364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.219 [2024-12-10 00:09:58.834368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.219 [2024-12-10 00:09:58.834371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3700) on tqpair=0x2071690 00:28:24.219 ===================================================== 00:28:24.219 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:24.219 ===================================================== 00:28:24.219 Controller Capabilities/Features 00:28:24.219 ================================ 00:28:24.219 Vendor ID: 0000 00:28:24.219 Subsystem Vendor ID: 0000 00:28:24.219 Serial Number: .................... 00:28:24.219 Model Number: ........................................ 
00:28:24.219 Firmware Version: 25.01 00:28:24.219 Recommended Arb Burst: 0 00:28:24.219 IEEE OUI Identifier: 00 00 00 00:28:24.219 Multi-path I/O 00:28:24.219 May have multiple subsystem ports: No 00:28:24.219 May have multiple controllers: No 00:28:24.219 Associated with SR-IOV VF: No 00:28:24.219 Max Data Transfer Size: 131072 00:28:24.219 Max Number of Namespaces: 0 00:28:24.219 Max Number of I/O Queues: 1024 00:28:24.219 NVMe Specification Version (VS): 1.3 00:28:24.219 NVMe Specification Version (Identify): 1.3 00:28:24.219 Maximum Queue Entries: 128 00:28:24.219 Contiguous Queues Required: Yes 00:28:24.219 Arbitration Mechanisms Supported 00:28:24.219 Weighted Round Robin: Not Supported 00:28:24.219 Vendor Specific: Not Supported 00:28:24.219 Reset Timeout: 15000 ms 00:28:24.219 Doorbell Stride: 4 bytes 00:28:24.219 NVM Subsystem Reset: Not Supported 00:28:24.219 Command Sets Supported 00:28:24.219 NVM Command Set: Supported 00:28:24.219 Boot Partition: Not Supported 00:28:24.219 Memory Page Size Minimum: 4096 bytes 00:28:24.219 Memory Page Size Maximum: 4096 bytes 00:28:24.219 Persistent Memory Region: Not Supported 00:28:24.219 Optional Asynchronous Events Supported 00:28:24.219 Namespace Attribute Notices: Not Supported 00:28:24.219 Firmware Activation Notices: Not Supported 00:28:24.219 ANA Change Notices: Not Supported 00:28:24.219 PLE Aggregate Log Change Notices: Not Supported 00:28:24.219 LBA Status Info Alert Notices: Not Supported 00:28:24.219 EGE Aggregate Log Change Notices: Not Supported 00:28:24.219 Normal NVM Subsystem Shutdown event: Not Supported 00:28:24.219 Zone Descriptor Change Notices: Not Supported 00:28:24.219 Discovery Log Change Notices: Supported 00:28:24.219 Controller Attributes 00:28:24.219 128-bit Host Identifier: Not Supported 00:28:24.219 Non-Operational Permissive Mode: Not Supported 00:28:24.219 NVM Sets: Not Supported 00:28:24.219 Read Recovery Levels: Not Supported 00:28:24.219 Endurance Groups: Not Supported 00:28:24.219 Predictable Latency Mode: Not Supported 00:28:24.219 Traffic Based Keep ALive: Not Supported 00:28:24.219 Namespace Granularity: Not Supported 00:28:24.219 SQ Associations: Not Supported 00:28:24.219 UUID List: Not Supported 00:28:24.219 Multi-Domain Subsystem: Not Supported 00:28:24.219 Fixed Capacity Management: Not Supported 00:28:24.219 Variable Capacity Management: Not Supported 00:28:24.219 Delete Endurance Group: Not Supported 00:28:24.219 Delete NVM Set: Not Supported 00:28:24.219 Extended LBA Formats Supported: Not Supported 00:28:24.219 Flexible Data Placement Supported: Not Supported 00:28:24.219 00:28:24.219 Controller Memory Buffer Support 00:28:24.219 ================================ 00:28:24.219 Supported: No 00:28:24.219 00:28:24.219 Persistent Memory Region Support 00:28:24.219 ================================ 00:28:24.219 Supported: No 00:28:24.219 00:28:24.219 Admin Command Set Attributes 00:28:24.219 ============================ 00:28:24.219 Security Send/Receive: Not Supported 00:28:24.219 Format NVM: Not Supported 00:28:24.219 Firmware Activate/Download: Not Supported 00:28:24.219 Namespace Management: Not Supported 00:28:24.219 Device Self-Test: Not Supported 00:28:24.219 Directives: Not Supported 00:28:24.219 NVMe-MI: Not Supported 00:28:24.219 Virtualization Management: Not Supported 00:28:24.219 Doorbell Buffer Config: Not Supported 00:28:24.219 Get LBA Status Capability: Not Supported 00:28:24.219 Command & Feature Lockdown Capability: Not Supported 00:28:24.219 Abort Command Limit: 1 00:28:24.219 Async 
Event Request Limit: 4 00:28:24.219 Number of Firmware Slots: N/A 00:28:24.219 Firmware Slot 1 Read-Only: N/A 00:28:24.219 Firmware Activation Without Reset: N/A 00:28:24.219 Multiple Update Detection Support: N/A 00:28:24.219 Firmware Update Granularity: No Information Provided 00:28:24.219 Per-Namespace SMART Log: No 00:28:24.219 Asymmetric Namespace Access Log Page: Not Supported 00:28:24.219 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:24.219 Command Effects Log Page: Not Supported 00:28:24.219 Get Log Page Extended Data: Supported 00:28:24.219 Telemetry Log Pages: Not Supported 00:28:24.219 Persistent Event Log Pages: Not Supported 00:28:24.219 Supported Log Pages Log Page: May Support 00:28:24.220 Commands Supported & Effects Log Page: Not Supported 00:28:24.220 Feature Identifiers & Effects Log Page:May Support 00:28:24.220 NVMe-MI Commands & Effects Log Page: May Support 00:28:24.220 Data Area 4 for Telemetry Log: Not Supported 00:28:24.220 Error Log Page Entries Supported: 128 00:28:24.220 Keep Alive: Not Supported 00:28:24.220 00:28:24.220 NVM Command Set Attributes 00:28:24.220 ========================== 00:28:24.220 Submission Queue Entry Size 00:28:24.220 Max: 1 00:28:24.220 Min: 1 00:28:24.220 Completion Queue Entry Size 00:28:24.220 Max: 1 00:28:24.220 Min: 1 00:28:24.220 Number of Namespaces: 0 00:28:24.220 Compare Command: Not Supported 00:28:24.220 Write Uncorrectable Command: Not Supported 00:28:24.220 Dataset Management Command: Not Supported 00:28:24.220 Write Zeroes Command: Not Supported 00:28:24.220 Set Features Save Field: Not Supported 00:28:24.220 Reservations: Not Supported 00:28:24.220 Timestamp: Not Supported 00:28:24.220 Copy: Not Supported 00:28:24.220 Volatile Write Cache: Not Present 00:28:24.220 Atomic Write Unit (Normal): 1 00:28:24.220 Atomic Write Unit (PFail): 1 00:28:24.220 Atomic Compare & Write Unit: 1 00:28:24.220 Fused Compare & Write: Supported 00:28:24.220 Scatter-Gather List 00:28:24.220 SGL Command Set: Supported 00:28:24.220 SGL Keyed: Supported 00:28:24.220 SGL Bit Bucket Descriptor: Not Supported 00:28:24.220 SGL Metadata Pointer: Not Supported 00:28:24.220 Oversized SGL: Not Supported 00:28:24.220 SGL Metadata Address: Not Supported 00:28:24.220 SGL Offset: Supported 00:28:24.220 Transport SGL Data Block: Not Supported 00:28:24.220 Replay Protected Memory Block: Not Supported 00:28:24.220 00:28:24.220 Firmware Slot Information 00:28:24.220 ========================= 00:28:24.220 Active slot: 0 00:28:24.220 00:28:24.220 00:28:24.220 Error Log 00:28:24.220 ========= 00:28:24.220 00:28:24.220 Active Namespaces 00:28:24.220 ================= 00:28:24.220 Discovery Log Page 00:28:24.220 ================== 00:28:24.220 Generation Counter: 2 00:28:24.220 Number of Records: 2 00:28:24.220 Record Format: 0 00:28:24.220 00:28:24.220 Discovery Log Entry 0 00:28:24.220 ---------------------- 00:28:24.220 Transport Type: 3 (TCP) 00:28:24.220 Address Family: 1 (IPv4) 00:28:24.220 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:24.220 Entry Flags: 00:28:24.220 Duplicate Returned Information: 1 00:28:24.220 Explicit Persistent Connection Support for Discovery: 1 00:28:24.220 Transport Requirements: 00:28:24.220 Secure Channel: Not Required 00:28:24.220 Port ID: 0 (0x0000) 00:28:24.220 Controller ID: 65535 (0xffff) 00:28:24.220 Admin Max SQ Size: 128 00:28:24.220 Transport Service Identifier: 4420 00:28:24.220 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:24.220 Transport Address: 10.0.0.2 00:28:24.220 
Discovery Log Entry 1 00:28:24.220 ---------------------- 00:28:24.220 Transport Type: 3 (TCP) 00:28:24.220 Address Family: 1 (IPv4) 00:28:24.220 Subsystem Type: 2 (NVM Subsystem) 00:28:24.220 Entry Flags: 00:28:24.220 Duplicate Returned Information: 0 00:28:24.220 Explicit Persistent Connection Support for Discovery: 0 00:28:24.220 Transport Requirements: 00:28:24.220 Secure Channel: Not Required 00:28:24.220 Port ID: 0 (0x0000) 00:28:24.220 Controller ID: 65535 (0xffff) 00:28:24.220 Admin Max SQ Size: 128 00:28:24.220 Transport Service Identifier: 4420 00:28:24.220 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:24.220 Transport Address: 10.0.0.2 [2024-12-10 00:09:58.834456] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:28:24.220 [2024-12-10 00:09:58.834467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3100) on tqpair=0x2071690 00:28:24.220 [2024-12-10 00:09:58.834474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.220 [2024-12-10 00:09:58.834479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3280) on tqpair=0x2071690 00:28:24.220 [2024-12-10 00:09:58.834483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.220 [2024-12-10 00:09:58.834487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3400) on tqpair=0x2071690 00:28:24.220 [2024-12-10 00:09:58.834492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.220 [2024-12-10 00:09:58.834496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3580) on tqpair=0x2071690 00:28:24.220 [2024-12-10 00:09:58.834501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.220 [2024-12-10 00:09:58.834510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.834513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.834517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2071690) 00:28:24.220 [2024-12-10 00:09:58.834523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.220 [2024-12-10 00:09:58.834538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3580, cid 3, qid 0 00:28:24.220 [2024-12-10 00:09:58.834604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.220 [2024-12-10 00:09:58.834609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.220 [2024-12-10 00:09:58.834613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.834616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3580) on tqpair=0x2071690 00:28:24.220 [2024-12-10 00:09:58.834623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.834626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.834629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2071690) 00:28:24.220 [2024-12-10 
00:09:58.834635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.220 [2024-12-10 00:09:58.834648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3580, cid 3, qid 0 00:28:24.220 [2024-12-10 00:09:58.834753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.220 [2024-12-10 00:09:58.834759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.220 [2024-12-10 00:09:58.834762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.834765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3580) on tqpair=0x2071690 00:28:24.220 [2024-12-10 00:09:58.834770] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:28:24.220 [2024-12-10 00:09:58.834774] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:28:24.220 [2024-12-10 00:09:58.834783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.834787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.834790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2071690) 00:28:24.220 [2024-12-10 00:09:58.834796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.220 [2024-12-10 00:09:58.834806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3580, cid 3, qid 0 00:28:24.220 [2024-12-10 00:09:58.834868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.220 [2024-12-10 00:09:58.834873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.220 [2024-12-10 00:09:58.834876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.834880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3580) on tqpair=0x2071690 00:28:24.220 [2024-12-10 00:09:58.834889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.834892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.834896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2071690) 00:28:24.220 [2024-12-10 00:09:58.834902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.220 [2024-12-10 00:09:58.834911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3580, cid 3, qid 0 00:28:24.220 [2024-12-10 00:09:58.835024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.220 [2024-12-10 00:09:58.835030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.220 [2024-12-10 00:09:58.835033] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.835036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3580) on tqpair=0x2071690 00:28:24.220 [2024-12-10 00:09:58.835045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.835049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.835052] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2071690) 00:28:24.220 [2024-12-10 00:09:58.835057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.220 [2024-12-10 00:09:58.835067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3580, cid 3, qid 0 00:28:24.220 [2024-12-10 00:09:58.839163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.220 [2024-12-10 00:09:58.839172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.220 [2024-12-10 00:09:58.839175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.839178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3580) on tqpair=0x2071690 00:28:24.220 [2024-12-10 00:09:58.839187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.839191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.839194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2071690) 00:28:24.220 [2024-12-10 00:09:58.839200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.220 [2024-12-10 00:09:58.839210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d3580, cid 3, qid 0 00:28:24.220 [2024-12-10 00:09:58.839396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.220 [2024-12-10 00:09:58.839401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.220 [2024-12-10 00:09:58.839404] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.839408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20d3580) on tqpair=0x2071690 00:28:24.220 [2024-12-10 00:09:58.839414] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:28:24.220 00:28:24.220 00:09:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:24.220 [2024-12-10 00:09:58.876066] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
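Reader's note (not part of the captured output): the spdk_nvme_identify invocation shown just above drives the admin-queue connect/identify sequence whose *DEBUG* traces follow. The sketch below is a minimal, hypothetical C program using SPDK's public NVMe API; it is not the identify tool or the host/identify.sh test itself, the application name "identify_sketch" is invented, and error handling is trimmed to keep it short.

/* Minimal sketch, assuming SPDK's public headers spdk/env.h and spdk/nvme.h.
 * It connects to the same TCP target named in the -r argument above and reads
 * back the controller data, i.e. the path whose DEBUG traces appear below. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        env_opts.opts_size = sizeof(env_opts);   /* expected by recent SPDK releases */
        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";       /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
                return 1;
        }

        /* Same target string as the -r argument in the command above. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                return 1;
        }

        /* spdk_nvme_connect() walks the admin-queue state machine traced below:
         * FABRIC CONNECT, read VS/CAP, set CC.EN, wait for CSTS.RDY, IDENTIFY,
         * AER configuration and keep-alive setup. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
                fprintf(stderr, "connect to %s failed\n", trid.subnqn);
                return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Number of Namespaces: %u\n", cdata->nn);

        spdk_nvme_detach(ctrlr);
        return 0;
}

Built against an SPDK tree like the one checked out in this job, running such a program against 10.0.0.2:4420 would exercise the same nvme_tcp.c / nvme_ctrlr.c code paths that produce the debug lines that follow.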
00:28:24.220 [2024-12-10 00:09:58.876099] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid452434 ] 00:28:24.220 [2024-12-10 00:09:58.917426] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:28:24.220 [2024-12-10 00:09:58.917469] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:24.220 [2024-12-10 00:09:58.917474] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:24.220 [2024-12-10 00:09:58.917492] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:24.220 [2024-12-10 00:09:58.917500] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:24.220 [2024-12-10 00:09:58.921333] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:28:24.220 [2024-12-10 00:09:58.921365] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b17690 0 00:28:24.220 [2024-12-10 00:09:58.929169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:24.220 [2024-12-10 00:09:58.929182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:24.220 [2024-12-10 00:09:58.929186] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:24.220 [2024-12-10 00:09:58.929189] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:24.220 [2024-12-10 00:09:58.929215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.929221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.220 [2024-12-10 00:09:58.929224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b17690) 00:28:24.221 [2024-12-10 00:09:58.929235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:24.221 [2024-12-10 00:09:58.929252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79100, cid 0, qid 0 00:28:24.221 [2024-12-10 00:09:58.937167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.221 [2024-12-10 00:09:58.937174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.221 [2024-12-10 00:09:58.937178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79100) on tqpair=0x1b17690 00:28:24.221 [2024-12-10 00:09:58.937192] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:24.221 [2024-12-10 00:09:58.937198] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:28:24.221 [2024-12-10 00:09:58.937203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:28:24.221 [2024-12-10 00:09:58.937214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937221] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b17690) 00:28:24.221 [2024-12-10 00:09:58.937227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.221 [2024-12-10 00:09:58.937240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79100, cid 0, qid 0 00:28:24.221 [2024-12-10 00:09:58.937403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.221 [2024-12-10 00:09:58.937409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.221 [2024-12-10 00:09:58.937412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79100) on tqpair=0x1b17690 00:28:24.221 [2024-12-10 00:09:58.937422] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:28:24.221 [2024-12-10 00:09:58.937429] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:28:24.221 [2024-12-10 00:09:58.937435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b17690) 00:28:24.221 [2024-12-10 00:09:58.937448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.221 [2024-12-10 00:09:58.937458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79100, cid 0, qid 0 00:28:24.221 [2024-12-10 00:09:58.937550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.221 [2024-12-10 00:09:58.937556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.221 [2024-12-10 00:09:58.937562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937565] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79100) on tqpair=0x1b17690 00:28:24.221 [2024-12-10 00:09:58.937570] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:28:24.221 [2024-12-10 00:09:58.937576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:28:24.221 [2024-12-10 00:09:58.937582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b17690) 00:28:24.221 [2024-12-10 00:09:58.937594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.221 [2024-12-10 00:09:58.937604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79100, cid 0, qid 0 00:28:24.221 [2024-12-10 00:09:58.937669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.221 [2024-12-10 00:09:58.937675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.221 [2024-12-10 
00:09:58.937678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79100) on tqpair=0x1b17690 00:28:24.221 [2024-12-10 00:09:58.937686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:24.221 [2024-12-10 00:09:58.937694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b17690) 00:28:24.221 [2024-12-10 00:09:58.937707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.221 [2024-12-10 00:09:58.937717] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79100, cid 0, qid 0 00:28:24.221 [2024-12-10 00:09:58.937801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.221 [2024-12-10 00:09:58.937807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.221 [2024-12-10 00:09:58.937810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79100) on tqpair=0x1b17690 00:28:24.221 [2024-12-10 00:09:58.937817] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:28:24.221 [2024-12-10 00:09:58.937821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:28:24.221 [2024-12-10 00:09:58.937828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:24.221 [2024-12-10 00:09:58.937935] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:28:24.221 [2024-12-10 00:09:58.937940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:24.221 [2024-12-10 00:09:58.937946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.937953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b17690) 00:28:24.221 [2024-12-10 00:09:58.937958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.221 [2024-12-10 00:09:58.937968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79100, cid 0, qid 0 00:28:24.221 [2024-12-10 00:09:58.938037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.221 [2024-12-10 00:09:58.938043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.221 [2024-12-10 00:09:58.938046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.938049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79100) on tqpair=0x1b17690 00:28:24.221 
[2024-12-10 00:09:58.938054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:24.221 [2024-12-10 00:09:58.938062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.938065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.938069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b17690) 00:28:24.221 [2024-12-10 00:09:58.938074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.221 [2024-12-10 00:09:58.938084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79100, cid 0, qid 0 00:28:24.221 [2024-12-10 00:09:58.938189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.221 [2024-12-10 00:09:58.938195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.221 [2024-12-10 00:09:58.938199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.938202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79100) on tqpair=0x1b17690 00:28:24.221 [2024-12-10 00:09:58.938206] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:24.221 [2024-12-10 00:09:58.938210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:28:24.221 [2024-12-10 00:09:58.938217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:28:24.221 [2024-12-10 00:09:58.938223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:28:24.221 [2024-12-10 00:09:58.938231] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.938234] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b17690) 00:28:24.221 [2024-12-10 00:09:58.938240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.221 [2024-12-10 00:09:58.938250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79100, cid 0, qid 0 00:28:24.221 [2024-12-10 00:09:58.938337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.221 [2024-12-10 00:09:58.938342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.221 [2024-12-10 00:09:58.938345] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.938348] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b17690): datao=0, datal=4096, cccid=0 00:28:24.221 [2024-12-10 00:09:58.938353] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b79100) on tqpair(0x1b17690): expected_datao=0, payload_size=4096 00:28:24.221 [2024-12-10 00:09:58.938357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.938372] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.938377] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.938440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.221 [2024-12-10 00:09:58.938446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.221 [2024-12-10 00:09:58.938449] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.938452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79100) on tqpair=0x1b17690 00:28:24.221 [2024-12-10 00:09:58.938461] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:28:24.221 [2024-12-10 00:09:58.938465] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:28:24.221 [2024-12-10 00:09:58.938469] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:28:24.221 [2024-12-10 00:09:58.938473] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:28:24.221 [2024-12-10 00:09:58.938476] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:28:24.221 [2024-12-10 00:09:58.938480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:28:24.221 [2024-12-10 00:09:58.938488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:28:24.221 [2024-12-10 00:09:58.938494] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.221 [2024-12-10 00:09:58.938498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.938501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b17690) 00:28:24.222 [2024-12-10 00:09:58.938507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:24.222 [2024-12-10 00:09:58.938517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79100, cid 0, qid 0 00:28:24.222 [2024-12-10 00:09:58.938582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.222 [2024-12-10 00:09:58.938587] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.222 [2024-12-10 00:09:58.938590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.938593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79100) on tqpair=0x1b17690 00:28:24.222 [2024-12-10 00:09:58.938599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.938602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.938606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b17690) 00:28:24.222 [2024-12-10 00:09:58.938611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.222 [2024-12-10 00:09:58.938616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.938619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.222 [2024-12-10 
00:09:58.938622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b17690) 00:28:24.222 [2024-12-10 00:09:58.938627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.222 [2024-12-10 00:09:58.938632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.938635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.938638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b17690) 00:28:24.222 [2024-12-10 00:09:58.938643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.222 [2024-12-10 00:09:58.938648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.938652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.938654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.222 [2024-12-10 00:09:58.938660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.222 [2024-12-10 00:09:58.938664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:24.222 [2024-12-10 00:09:58.938675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:24.222 [2024-12-10 00:09:58.938681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.938684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b17690) 00:28:24.222 [2024-12-10 00:09:58.938689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.222 [2024-12-10 00:09:58.938700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79100, cid 0, qid 0 00:28:24.222 [2024-12-10 00:09:58.938705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79280, cid 1, qid 0 00:28:24.222 [2024-12-10 00:09:58.938709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79400, cid 2, qid 0 00:28:24.222 [2024-12-10 00:09:58.938714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.222 [2024-12-10 00:09:58.938718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79700, cid 4, qid 0 00:28:24.222 [2024-12-10 00:09:58.938833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.222 [2024-12-10 00:09:58.938839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.222 [2024-12-10 00:09:58.938842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.938845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79700) on tqpair=0x1b17690 00:28:24.222 [2024-12-10 00:09:58.938849] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:28:24.222 [2024-12-10 00:09:58.938854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:24.222 [2024-12-10 00:09:58.938862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:28:24.222 [2024-12-10 00:09:58.938868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:24.222 [2024-12-10 00:09:58.938874] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.938877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.938880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b17690) 00:28:24.222 [2024-12-10 00:09:58.938886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:24.222 [2024-12-10 00:09:58.938896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79700, cid 4, qid 0 00:28:24.222 [2024-12-10 00:09:58.938983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.222 [2024-12-10 00:09:58.938989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.222 [2024-12-10 00:09:58.938992] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.938995] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79700) on tqpair=0x1b17690 00:28:24.222 [2024-12-10 00:09:58.939048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:28:24.222 [2024-12-10 00:09:58.939058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:24.222 [2024-12-10 00:09:58.939065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.939068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b17690) 00:28:24.222 [2024-12-10 00:09:58.939074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.222 [2024-12-10 00:09:58.939086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79700, cid 4, qid 0 00:28:24.222 [2024-12-10 00:09:58.939164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.222 [2024-12-10 00:09:58.939171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.222 [2024-12-10 00:09:58.939174] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.939177] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b17690): datao=0, datal=4096, cccid=4 00:28:24.222 [2024-12-10 00:09:58.939181] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b79700) on tqpair(0x1b17690): expected_datao=0, payload_size=4096 00:28:24.222 [2024-12-10 00:09:58.939185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.939203] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.939207] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.222 [2024-12-10 
00:09:58.980295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.222 [2024-12-10 00:09:58.980305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.222 [2024-12-10 00:09:58.980308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.980312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79700) on tqpair=0x1b17690 00:28:24.222 [2024-12-10 00:09:58.980324] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:28:24.222 [2024-12-10 00:09:58.980332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:28:24.222 [2024-12-10 00:09:58.980341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:28:24.222 [2024-12-10 00:09:58.980348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.980351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b17690) 00:28:24.222 [2024-12-10 00:09:58.980358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.222 [2024-12-10 00:09:58.980370] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79700, cid 4, qid 0 00:28:24.222 [2024-12-10 00:09:58.980479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.222 [2024-12-10 00:09:58.980484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.222 [2024-12-10 00:09:58.980487] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.980491] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b17690): datao=0, datal=4096, cccid=4 00:28:24.222 [2024-12-10 00:09:58.980495] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b79700) on tqpair(0x1b17690): expected_datao=0, payload_size=4096 00:28:24.222 [2024-12-10 00:09:58.980498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.980504] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.980507] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.980521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.222 [2024-12-10 00:09:58.980527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.222 [2024-12-10 00:09:58.980530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.980533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79700) on tqpair=0x1b17690 00:28:24.222 [2024-12-10 00:09:58.980542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:24.222 [2024-12-10 00:09:58.980550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:24.222 [2024-12-10 00:09:58.980557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.222 [2024-12-10 00:09:58.980562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1b17690) 00:28:24.222 [2024-12-10 00:09:58.980568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.223 [2024-12-10 00:09:58.980579] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79700, cid 4, qid 0 00:28:24.223 [2024-12-10 00:09:58.980657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.223 [2024-12-10 00:09:58.980663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.223 [2024-12-10 00:09:58.980666] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:58.980669] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b17690): datao=0, datal=4096, cccid=4 00:28:24.223 [2024-12-10 00:09:58.980673] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b79700) on tqpair(0x1b17690): expected_datao=0, payload_size=4096 00:28:24.223 [2024-12-10 00:09:58.980677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:58.980687] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:58.980690] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.223 [2024-12-10 00:09:59.025178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.223 [2024-12-10 00:09:59.025182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79700) on tqpair=0x1b17690 00:28:24.223 [2024-12-10 00:09:59.025197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:24.223 [2024-12-10 00:09:59.025206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:28:24.223 [2024-12-10 00:09:59.025213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:28:24.223 [2024-12-10 00:09:59.025219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:24.223 [2024-12-10 00:09:59.025224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:24.223 [2024-12-10 00:09:59.025228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:28:24.223 [2024-12-10 00:09:59.025234] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:28:24.223 [2024-12-10 00:09:59.025238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:28:24.223 [2024-12-10 00:09:59.025243] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:28:24.223 [2024-12-10 00:09:59.025256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.223 
[2024-12-10 00:09:59.025260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b17690) 00:28:24.223 [2024-12-10 00:09:59.025267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.223 [2024-12-10 00:09:59.025273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b17690) 00:28:24.223 [2024-12-10 00:09:59.025284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:24.223 [2024-12-10 00:09:59.025301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79700, cid 4, qid 0 00:28:24.223 [2024-12-10 00:09:59.025306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79880, cid 5, qid 0 00:28:24.223 [2024-12-10 00:09:59.025402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.223 [2024-12-10 00:09:59.025407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.223 [2024-12-10 00:09:59.025411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79700) on tqpair=0x1b17690 00:28:24.223 [2024-12-10 00:09:59.025419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.223 [2024-12-10 00:09:59.025424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.223 [2024-12-10 00:09:59.025427] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79880) on tqpair=0x1b17690 00:28:24.223 [2024-12-10 00:09:59.025439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b17690) 00:28:24.223 [2024-12-10 00:09:59.025449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.223 [2024-12-10 00:09:59.025459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79880, cid 5, qid 0 00:28:24.223 [2024-12-10 00:09:59.025533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.223 [2024-12-10 00:09:59.025539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.223 [2024-12-10 00:09:59.025542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79880) on tqpair=0x1b17690 00:28:24.223 [2024-12-10 00:09:59.025553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b17690) 00:28:24.223 [2024-12-10 00:09:59.025562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.223 [2024-12-10 00:09:59.025571] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79880, cid 5, qid 0 00:28:24.223 [2024-12-10 00:09:59.025638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.223 [2024-12-10 00:09:59.025644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.223 [2024-12-10 00:09:59.025647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79880) on tqpair=0x1b17690 00:28:24.223 [2024-12-10 00:09:59.025658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b17690) 00:28:24.223 [2024-12-10 00:09:59.025667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.223 [2024-12-10 00:09:59.025676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79880, cid 5, qid 0 00:28:24.223 [2024-12-10 00:09:59.025737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.223 [2024-12-10 00:09:59.025742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.223 [2024-12-10 00:09:59.025745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79880) on tqpair=0x1b17690 00:28:24.223 [2024-12-10 00:09:59.025763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b17690) 00:28:24.223 [2024-12-10 00:09:59.025773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.223 [2024-12-10 00:09:59.025781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b17690) 00:28:24.223 [2024-12-10 00:09:59.025790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.223 [2024-12-10 00:09:59.025796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b17690) 00:28:24.223 [2024-12-10 00:09:59.025805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.223 [2024-12-10 00:09:59.025811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b17690) 00:28:24.223 [2024-12-10 00:09:59.025820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.223 [2024-12-10 00:09:59.025830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79880, cid 5, qid 0 00:28:24.223 
[2024-12-10 00:09:59.025835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79700, cid 4, qid 0 00:28:24.223 [2024-12-10 00:09:59.025839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79a00, cid 6, qid 0 00:28:24.223 [2024-12-10 00:09:59.025843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79b80, cid 7, qid 0 00:28:24.223 [2024-12-10 00:09:59.025982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.223 [2024-12-10 00:09:59.025988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.223 [2024-12-10 00:09:59.025991] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.025994] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b17690): datao=0, datal=8192, cccid=5 00:28:24.223 [2024-12-10 00:09:59.025998] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b79880) on tqpair(0x1b17690): expected_datao=0, payload_size=8192 00:28:24.223 [2024-12-10 00:09:59.026002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.026017] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.026020] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.223 [2024-12-10 00:09:59.026029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.223 [2024-12-10 00:09:59.026034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.223 [2024-12-10 00:09:59.026036] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026040] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b17690): datao=0, datal=512, cccid=4 00:28:24.224 [2024-12-10 00:09:59.026044] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b79700) on tqpair(0x1b17690): expected_datao=0, payload_size=512 00:28:24.224 [2024-12-10 00:09:59.026048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026053] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026056] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.224 [2024-12-10 00:09:59.026066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.224 [2024-12-10 00:09:59.026069] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026072] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b17690): datao=0, datal=512, cccid=6 00:28:24.224 [2024-12-10 00:09:59.026076] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b79a00) on tqpair(0x1b17690): expected_datao=0, payload_size=512 00:28:24.224 [2024-12-10 00:09:59.026081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026086] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026090] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:24.224 [2024-12-10 00:09:59.026099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:24.224 [2024-12-10 00:09:59.026102] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026105] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b17690): datao=0, datal=4096, cccid=7 00:28:24.224 [2024-12-10 00:09:59.026109] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b79b80) on tqpair(0x1b17690): expected_datao=0, payload_size=4096 00:28:24.224 [2024-12-10 00:09:59.026113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026118] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026121] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.224 [2024-12-10 00:09:59.026134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.224 [2024-12-10 00:09:59.026137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79880) on tqpair=0x1b17690 00:28:24.224 [2024-12-10 00:09:59.026150] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.224 [2024-12-10 00:09:59.026155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.224 [2024-12-10 00:09:59.026164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79700) on tqpair=0x1b17690 00:28:24.224 [2024-12-10 00:09:59.026175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.224 [2024-12-10 00:09:59.026181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.224 [2024-12-10 00:09:59.026184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79a00) on tqpair=0x1b17690 00:28:24.224 [2024-12-10 00:09:59.026193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.224 [2024-12-10 00:09:59.026198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.224 [2024-12-10 00:09:59.026201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.224 [2024-12-10 00:09:59.026204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79b80) on tqpair=0x1b17690 00:28:24.224 ===================================================== 00:28:24.224 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.224 ===================================================== 00:28:24.224 Controller Capabilities/Features 00:28:24.224 ================================ 00:28:24.224 Vendor ID: 8086 00:28:24.224 Subsystem Vendor ID: 8086 00:28:24.224 Serial Number: SPDK00000000000001 00:28:24.224 Model Number: SPDK bdev Controller 00:28:24.224 Firmware Version: 25.01 00:28:24.224 Recommended Arb Burst: 6 00:28:24.224 IEEE OUI Identifier: e4 d2 5c 00:28:24.224 Multi-path I/O 00:28:24.224 May have multiple subsystem ports: Yes 00:28:24.224 May have multiple controllers: Yes 00:28:24.224 Associated with SR-IOV VF: No 00:28:24.224 Max Data Transfer Size: 131072 00:28:24.224 Max Number of Namespaces: 32 00:28:24.224 Max Number of I/O Queues: 127 00:28:24.224 NVMe Specification Version (VS): 1.3 00:28:24.224 NVMe Specification Version (Identify): 1.3 
00:28:24.224 Maximum Queue Entries: 128 00:28:24.224 Contiguous Queues Required: Yes 00:28:24.224 Arbitration Mechanisms Supported 00:28:24.224 Weighted Round Robin: Not Supported 00:28:24.224 Vendor Specific: Not Supported 00:28:24.224 Reset Timeout: 15000 ms 00:28:24.224 Doorbell Stride: 4 bytes 00:28:24.224 NVM Subsystem Reset: Not Supported 00:28:24.224 Command Sets Supported 00:28:24.224 NVM Command Set: Supported 00:28:24.224 Boot Partition: Not Supported 00:28:24.224 Memory Page Size Minimum: 4096 bytes 00:28:24.224 Memory Page Size Maximum: 4096 bytes 00:28:24.224 Persistent Memory Region: Not Supported 00:28:24.224 Optional Asynchronous Events Supported 00:28:24.224 Namespace Attribute Notices: Supported 00:28:24.224 Firmware Activation Notices: Not Supported 00:28:24.224 ANA Change Notices: Not Supported 00:28:24.224 PLE Aggregate Log Change Notices: Not Supported 00:28:24.224 LBA Status Info Alert Notices: Not Supported 00:28:24.224 EGE Aggregate Log Change Notices: Not Supported 00:28:24.224 Normal NVM Subsystem Shutdown event: Not Supported 00:28:24.224 Zone Descriptor Change Notices: Not Supported 00:28:24.224 Discovery Log Change Notices: Not Supported 00:28:24.224 Controller Attributes 00:28:24.224 128-bit Host Identifier: Supported 00:28:24.224 Non-Operational Permissive Mode: Not Supported 00:28:24.224 NVM Sets: Not Supported 00:28:24.224 Read Recovery Levels: Not Supported 00:28:24.224 Endurance Groups: Not Supported 00:28:24.224 Predictable Latency Mode: Not Supported 00:28:24.224 Traffic Based Keep ALive: Not Supported 00:28:24.224 Namespace Granularity: Not Supported 00:28:24.224 SQ Associations: Not Supported 00:28:24.224 UUID List: Not Supported 00:28:24.224 Multi-Domain Subsystem: Not Supported 00:28:24.224 Fixed Capacity Management: Not Supported 00:28:24.224 Variable Capacity Management: Not Supported 00:28:24.224 Delete Endurance Group: Not Supported 00:28:24.224 Delete NVM Set: Not Supported 00:28:24.224 Extended LBA Formats Supported: Not Supported 00:28:24.224 Flexible Data Placement Supported: Not Supported 00:28:24.224 00:28:24.224 Controller Memory Buffer Support 00:28:24.224 ================================ 00:28:24.224 Supported: No 00:28:24.224 00:28:24.224 Persistent Memory Region Support 00:28:24.224 ================================ 00:28:24.224 Supported: No 00:28:24.224 00:28:24.224 Admin Command Set Attributes 00:28:24.224 ============================ 00:28:24.224 Security Send/Receive: Not Supported 00:28:24.224 Format NVM: Not Supported 00:28:24.224 Firmware Activate/Download: Not Supported 00:28:24.225 Namespace Management: Not Supported 00:28:24.225 Device Self-Test: Not Supported 00:28:24.225 Directives: Not Supported 00:28:24.225 NVMe-MI: Not Supported 00:28:24.225 Virtualization Management: Not Supported 00:28:24.225 Doorbell Buffer Config: Not Supported 00:28:24.225 Get LBA Status Capability: Not Supported 00:28:24.225 Command & Feature Lockdown Capability: Not Supported 00:28:24.225 Abort Command Limit: 4 00:28:24.225 Async Event Request Limit: 4 00:28:24.225 Number of Firmware Slots: N/A 00:28:24.225 Firmware Slot 1 Read-Only: N/A 00:28:24.225 Firmware Activation Without Reset: N/A 00:28:24.225 Multiple Update Detection Support: N/A 00:28:24.225 Firmware Update Granularity: No Information Provided 00:28:24.225 Per-Namespace SMART Log: No 00:28:24.225 Asymmetric Namespace Access Log Page: Not Supported 00:28:24.225 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:24.225 Command Effects Log Page: Supported 00:28:24.225 Get Log Page Extended 
Data: Supported 00:28:24.225 Telemetry Log Pages: Not Supported 00:28:24.225 Persistent Event Log Pages: Not Supported 00:28:24.225 Supported Log Pages Log Page: May Support 00:28:24.225 Commands Supported & Effects Log Page: Not Supported 00:28:24.225 Feature Identifiers & Effects Log Page:May Support 00:28:24.225 NVMe-MI Commands & Effects Log Page: May Support 00:28:24.225 Data Area 4 for Telemetry Log: Not Supported 00:28:24.225 Error Log Page Entries Supported: 128 00:28:24.225 Keep Alive: Supported 00:28:24.225 Keep Alive Granularity: 10000 ms 00:28:24.225 00:28:24.225 NVM Command Set Attributes 00:28:24.225 ========================== 00:28:24.225 Submission Queue Entry Size 00:28:24.225 Max: 64 00:28:24.225 Min: 64 00:28:24.225 Completion Queue Entry Size 00:28:24.225 Max: 16 00:28:24.225 Min: 16 00:28:24.225 Number of Namespaces: 32 00:28:24.225 Compare Command: Supported 00:28:24.225 Write Uncorrectable Command: Not Supported 00:28:24.225 Dataset Management Command: Supported 00:28:24.225 Write Zeroes Command: Supported 00:28:24.225 Set Features Save Field: Not Supported 00:28:24.225 Reservations: Supported 00:28:24.225 Timestamp: Not Supported 00:28:24.225 Copy: Supported 00:28:24.225 Volatile Write Cache: Present 00:28:24.225 Atomic Write Unit (Normal): 1 00:28:24.225 Atomic Write Unit (PFail): 1 00:28:24.225 Atomic Compare & Write Unit: 1 00:28:24.225 Fused Compare & Write: Supported 00:28:24.225 Scatter-Gather List 00:28:24.225 SGL Command Set: Supported 00:28:24.225 SGL Keyed: Supported 00:28:24.225 SGL Bit Bucket Descriptor: Not Supported 00:28:24.225 SGL Metadata Pointer: Not Supported 00:28:24.225 Oversized SGL: Not Supported 00:28:24.225 SGL Metadata Address: Not Supported 00:28:24.225 SGL Offset: Supported 00:28:24.225 Transport SGL Data Block: Not Supported 00:28:24.225 Replay Protected Memory Block: Not Supported 00:28:24.225 00:28:24.225 Firmware Slot Information 00:28:24.225 ========================= 00:28:24.225 Active slot: 1 00:28:24.225 Slot 1 Firmware Revision: 25.01 00:28:24.225 00:28:24.225 00:28:24.225 Commands Supported and Effects 00:28:24.225 ============================== 00:28:24.225 Admin Commands 00:28:24.225 -------------- 00:28:24.225 Get Log Page (02h): Supported 00:28:24.225 Identify (06h): Supported 00:28:24.225 Abort (08h): Supported 00:28:24.225 Set Features (09h): Supported 00:28:24.225 Get Features (0Ah): Supported 00:28:24.225 Asynchronous Event Request (0Ch): Supported 00:28:24.225 Keep Alive (18h): Supported 00:28:24.225 I/O Commands 00:28:24.225 ------------ 00:28:24.225 Flush (00h): Supported LBA-Change 00:28:24.225 Write (01h): Supported LBA-Change 00:28:24.225 Read (02h): Supported 00:28:24.225 Compare (05h): Supported 00:28:24.225 Write Zeroes (08h): Supported LBA-Change 00:28:24.225 Dataset Management (09h): Supported LBA-Change 00:28:24.225 Copy (19h): Supported LBA-Change 00:28:24.225 00:28:24.225 Error Log 00:28:24.225 ========= 00:28:24.225 00:28:24.225 Arbitration 00:28:24.225 =========== 00:28:24.225 Arbitration Burst: 1 00:28:24.225 00:28:24.225 Power Management 00:28:24.225 ================ 00:28:24.225 Number of Power States: 1 00:28:24.225 Current Power State: Power State #0 00:28:24.225 Power State #0: 00:28:24.225 Max Power: 0.00 W 00:28:24.225 Non-Operational State: Operational 00:28:24.225 Entry Latency: Not Reported 00:28:24.225 Exit Latency: Not Reported 00:28:24.225 Relative Read Throughput: 0 00:28:24.225 Relative Read Latency: 0 00:28:24.225 Relative Write Throughput: 0 00:28:24.225 Relative Write Latency: 0 
00:28:24.225 Idle Power: Not Reported 00:28:24.225 Active Power: Not Reported 00:28:24.225 Non-Operational Permissive Mode: Not Supported 00:28:24.225 00:28:24.225 Health Information 00:28:24.225 ================== 00:28:24.225 Critical Warnings: 00:28:24.225 Available Spare Space: OK 00:28:24.225 Temperature: OK 00:28:24.225 Device Reliability: OK 00:28:24.225 Read Only: No 00:28:24.225 Volatile Memory Backup: OK 00:28:24.225 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:24.225 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:24.225 Available Spare: 0% 00:28:24.225 Available Spare Threshold: 0% 00:28:24.225 Life Percentage Used:[2024-12-10 00:09:59.026285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.225 [2024-12-10 00:09:59.026289] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b17690) 00:28:24.225 [2024-12-10 00:09:59.026295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.225 [2024-12-10 00:09:59.026306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79b80, cid 7, qid 0 00:28:24.225 [2024-12-10 00:09:59.026387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.225 [2024-12-10 00:09:59.026393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.225 [2024-12-10 00:09:59.026395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79b80) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.026428] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:28:24.226 [2024-12-10 00:09:59.026437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79100) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.026445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.226 [2024-12-10 00:09:59.026449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79280) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.026453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.226 [2024-12-10 00:09:59.026458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79400) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.026462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.226 [2024-12-10 00:09:59.026466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.026470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.226 [2024-12-10 00:09:59.026477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.226 [2024-12-10 00:09:59.026489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:24.226 [2024-12-10 00:09:59.026500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.226 [2024-12-10 00:09:59.026566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.226 [2024-12-10 00:09:59.026572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.226 [2024-12-10 00:09:59.026575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.026584] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026587] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.226 [2024-12-10 00:09:59.026596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.226 [2024-12-10 00:09:59.026608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.226 [2024-12-10 00:09:59.026678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.226 [2024-12-10 00:09:59.026683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.226 [2024-12-10 00:09:59.026686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.026694] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:28:24.226 [2024-12-10 00:09:59.026698] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:28:24.226 [2024-12-10 00:09:59.026706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026709] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.226 [2024-12-10 00:09:59.026718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.226 [2024-12-10 00:09:59.026727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.226 [2024-12-10 00:09:59.026790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.226 [2024-12-10 00:09:59.026796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.226 [2024-12-10 00:09:59.026799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.026812] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.226 [2024-12-10 00:09:59.026825] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.226 [2024-12-10 00:09:59.026834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.226 [2024-12-10 00:09:59.026901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.226 [2024-12-10 00:09:59.026906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.226 [2024-12-10 00:09:59.026909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.026921] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.026928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.226 [2024-12-10 00:09:59.026933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.226 [2024-12-10 00:09:59.026943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.226 [2024-12-10 00:09:59.027000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.226 [2024-12-10 00:09:59.027006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.226 [2024-12-10 00:09:59.027009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.027020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.226 [2024-12-10 00:09:59.027033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.226 [2024-12-10 00:09:59.027042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.226 [2024-12-10 00:09:59.027102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.226 [2024-12-10 00:09:59.027108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.226 [2024-12-10 00:09:59.027111] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.027123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.226 [2024-12-10 00:09:59.027135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.226 [2024-12-10 00:09:59.027144] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.226 [2024-12-10 00:09:59.027220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.226 [2024-12-10 00:09:59.027226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.226 [2024-12-10 00:09:59.027229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.027242] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.226 [2024-12-10 00:09:59.027255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.226 [2024-12-10 00:09:59.027264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.226 [2024-12-10 00:09:59.027329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.226 [2024-12-10 00:09:59.027335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.226 [2024-12-10 00:09:59.027338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.027349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.226 [2024-12-10 00:09:59.027362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.226 [2024-12-10 00:09:59.027371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.226 [2024-12-10 00:09:59.027440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.226 [2024-12-10 00:09:59.027446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.226 [2024-12-10 00:09:59.027448] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.027460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.226 [2024-12-10 00:09:59.027472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.226 [2024-12-10 00:09:59.027481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.226 [2024-12-10 00:09:59.027548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.226 [2024-12-10 
00:09:59.027553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.226 [2024-12-10 00:09:59.027556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.226 [2024-12-10 00:09:59.027568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.226 [2024-12-10 00:09:59.027575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.227 [2024-12-10 00:09:59.027580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.227 [2024-12-10 00:09:59.027589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.227 [2024-12-10 00:09:59.027656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.227 [2024-12-10 00:09:59.027662] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.227 [2024-12-10 00:09:59.027665] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.027668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.227 [2024-12-10 00:09:59.027676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.027681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.027684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.227 [2024-12-10 00:09:59.027690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.227 [2024-12-10 00:09:59.027699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.227 [2024-12-10 00:09:59.027766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.227 [2024-12-10 00:09:59.027771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.227 [2024-12-10 00:09:59.027774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.027777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.227 [2024-12-10 00:09:59.027786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.027789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.027792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.227 [2024-12-10 00:09:59.027798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.227 [2024-12-10 00:09:59.027807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.227 [2024-12-10 00:09:59.027874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.227 [2024-12-10 00:09:59.027880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.227 [2024-12-10 00:09:59.027883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.227 
[2024-12-10 00:09:59.027886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.227 [2024-12-10 00:09:59.027894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.027898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.027901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.227 [2024-12-10 00:09:59.027906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.227 [2024-12-10 00:09:59.027916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.227 [2024-12-10 00:09:59.027976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.227 [2024-12-10 00:09:59.027982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.227 [2024-12-10 00:09:59.027985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.027988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.227 [2024-12-10 00:09:59.027997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.227 [2024-12-10 00:09:59.028009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.227 [2024-12-10 00:09:59.028018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.227 [2024-12-10 00:09:59.028081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.227 [2024-12-10 00:09:59.028087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.227 [2024-12-10 00:09:59.028090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.227 [2024-12-10 00:09:59.028101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.227 [2024-12-10 00:09:59.028115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.227 [2024-12-10 00:09:59.028124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.227 [2024-12-10 00:09:59.028192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.227 [2024-12-10 00:09:59.028198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.227 [2024-12-10 00:09:59.028201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.227 [2024-12-10 00:09:59.028212] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.227 [2024-12-10 00:09:59.028225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.227 [2024-12-10 00:09:59.028234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.227 [2024-12-10 00:09:59.028299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.227 [2024-12-10 00:09:59.028305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.227 [2024-12-10 00:09:59.028308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028311] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.227 [2024-12-10 00:09:59.028319] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028323] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.227 [2024-12-10 00:09:59.028332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.227 [2024-12-10 00:09:59.028341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.227 [2024-12-10 00:09:59.028403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.227 [2024-12-10 00:09:59.028409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.227 [2024-12-10 00:09:59.028412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.227 [2024-12-10 00:09:59.028424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.227 [2024-12-10 00:09:59.028436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.227 [2024-12-10 00:09:59.028445] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.227 [2024-12-10 00:09:59.028505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.227 [2024-12-10 00:09:59.028511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.227 [2024-12-10 00:09:59.028514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.227 [2024-12-10 00:09:59.028525] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028529] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028532] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.227 [2024-12-10 00:09:59.028539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.227 [2024-12-10 00:09:59.028548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.227 [2024-12-10 00:09:59.028607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.227 [2024-12-10 00:09:59.028612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.227 [2024-12-10 00:09:59.028615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.227 [2024-12-10 00:09:59.028618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.227 [2024-12-10 00:09:59.028626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.028630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.028633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.228 [2024-12-10 00:09:59.028638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.228 [2024-12-10 00:09:59.028648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.228 [2024-12-10 00:09:59.028717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.228 [2024-12-10 00:09:59.028722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.228 [2024-12-10 00:09:59.028725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.028729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.228 [2024-12-10 00:09:59.028737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.028740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.028743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.228 [2024-12-10 00:09:59.028749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.228 [2024-12-10 00:09:59.028758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.228 [2024-12-10 00:09:59.028822] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.228 [2024-12-10 00:09:59.028828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.228 [2024-12-10 00:09:59.028831] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.028834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.228 [2024-12-10 00:09:59.028842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.028845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.028848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.228 [2024-12-10 00:09:59.028854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.228 [2024-12-10 00:09:59.028863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.228 [2024-12-10 00:09:59.028926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.228 [2024-12-10 00:09:59.028932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.228 [2024-12-10 00:09:59.028935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.028938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.228 [2024-12-10 00:09:59.028946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.028949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.028952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.228 [2024-12-10 00:09:59.028958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.228 [2024-12-10 00:09:59.028969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.228 [2024-12-10 00:09:59.029035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.228 [2024-12-10 00:09:59.029041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.228 [2024-12-10 00:09:59.029044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.029047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.228 [2024-12-10 00:09:59.029055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.029059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.029062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.228 [2024-12-10 00:09:59.029067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.228 [2024-12-10 00:09:59.029076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.228 [2024-12-10 00:09:59.029136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.228 [2024-12-10 00:09:59.029142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.228 [2024-12-10 00:09:59.029145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.029148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.228 [2024-12-10 00:09:59.033160] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.033166] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.033169] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b17690) 00:28:24.228 [2024-12-10 00:09:59.033175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.228 [2024-12-10 00:09:59.033186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b79580, cid 3, qid 0 00:28:24.228 [2024-12-10 
00:09:59.033315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:24.228 [2024-12-10 00:09:59.033321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:24.228 [2024-12-10 00:09:59.033324] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:24.228 [2024-12-10 00:09:59.033327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b79580) on tqpair=0x1b17690 00:28:24.228 [2024-12-10 00:09:59.033334] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:28:24.228 0% 00:28:24.228 Data Units Read: 0 00:28:24.228 Data Units Written: 0 00:28:24.228 Host Read Commands: 0 00:28:24.228 Host Write Commands: 0 00:28:24.228 Controller Busy Time: 0 minutes 00:28:24.228 Power Cycles: 0 00:28:24.228 Power On Hours: 0 hours 00:28:24.228 Unsafe Shutdowns: 0 00:28:24.228 Unrecoverable Media Errors: 0 00:28:24.228 Lifetime Error Log Entries: 0 00:28:24.228 Warning Temperature Time: 0 minutes 00:28:24.228 Critical Temperature Time: 0 minutes 00:28:24.228 00:28:24.228 Number of Queues 00:28:24.228 ================ 00:28:24.228 Number of I/O Submission Queues: 127 00:28:24.228 Number of I/O Completion Queues: 127 00:28:24.228 00:28:24.228 Active Namespaces 00:28:24.228 ================= 00:28:24.228 Namespace ID:1 00:28:24.228 Error Recovery Timeout: Unlimited 00:28:24.228 Command Set Identifier: NVM (00h) 00:28:24.228 Deallocate: Supported 00:28:24.228 Deallocated/Unwritten Error: Not Supported 00:28:24.228 Deallocated Read Value: Unknown 00:28:24.228 Deallocate in Write Zeroes: Not Supported 00:28:24.228 Deallocated Guard Field: 0xFFFF 00:28:24.228 Flush: Supported 00:28:24.228 Reservation: Supported 00:28:24.228 Namespace Sharing Capabilities: Multiple Controllers 00:28:24.228 Size (in LBAs): 131072 (0GiB) 00:28:24.228 Capacity (in LBAs): 131072 (0GiB) 00:28:24.228 Utilization (in LBAs): 131072 (0GiB) 00:28:24.228 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:24.228 EUI64: ABCDEF0123456789 00:28:24.228 UUID: 9cc4201a-6857-4a76-8065-9f04b960a07e 00:28:24.228 Thin Provisioning: Not Supported 00:28:24.228 Per-NS Atomic Units: Yes 00:28:24.228 Atomic Boundary Size (Normal): 0 00:28:24.228 Atomic Boundary Size (PFail): 0 00:28:24.228 Atomic Boundary Offset: 0 00:28:24.228 Maximum Single Source Range Length: 65535 00:28:24.228 Maximum Copy Length: 65535 00:28:24.228 Maximum Source Range Count: 1 00:28:24.228 NGUID/EUI64 Never Reused: No 00:28:24.228 Namespace Write Protected: No 00:28:24.228 Number of LBA Formats: 1 00:28:24.228 Current LBA Format: LBA Format #00 00:28:24.228 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:24.228 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 
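A rough, illustrative sketch of reproducing the identify dump and the subsystem teardown above by hand (the address, port and NQN are taken from the output above; the rpc.py socket is assumed to be the default /var/tmp/spdk.sock, the binary paths depend on the build layout, and /dev/nvme0 depends on enumeration order):

    # Query the NVMe-oF/TCP target with SPDK's identify example app
    ./build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

    # Or with nvme-cli: discover, connect, dump the controller data, disconnect
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    # Delete the subsystem on the target, as the rpc_cmd nvmf_delete_subsystem call above does
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
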
00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:24.228 rmmod nvme_tcp 00:28:24.228 rmmod nvme_fabrics 00:28:24.228 rmmod nvme_keyring 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 452195 ']' 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 452195 00:28:24.228 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 452195 ']' 00:28:24.229 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 452195 00:28:24.229 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:28:24.229 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:24.229 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 452195 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 452195' 00:28:24.488 killing process with pid 452195 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 452195 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 452195 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.488 00:09:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.028 00:10:01 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:27.028 00:28:27.028 real 0m9.297s 00:28:27.028 user 0m5.456s 00:28:27.028 sys 0m4.799s 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:27.028 ************************************ 00:28:27.028 END TEST nvmf_identify 00:28:27.028 ************************************ 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.028 ************************************ 00:28:27.028 START TEST nvmf_perf 00:28:27.028 ************************************ 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:27.028 * Looking for test storage... 00:28:27.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:27.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.028 --rc genhtml_branch_coverage=1 00:28:27.028 --rc genhtml_function_coverage=1 00:28:27.028 --rc genhtml_legend=1 00:28:27.028 --rc geninfo_all_blocks=1 00:28:27.028 --rc geninfo_unexecuted_blocks=1 00:28:27.028 00:28:27.028 ' 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:27.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.028 --rc genhtml_branch_coverage=1 00:28:27.028 --rc genhtml_function_coverage=1 00:28:27.028 --rc genhtml_legend=1 00:28:27.028 --rc geninfo_all_blocks=1 00:28:27.028 --rc geninfo_unexecuted_blocks=1 00:28:27.028 00:28:27.028 ' 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:27.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.028 --rc genhtml_branch_coverage=1 00:28:27.028 --rc genhtml_function_coverage=1 00:28:27.028 --rc genhtml_legend=1 00:28:27.028 --rc geninfo_all_blocks=1 00:28:27.028 --rc geninfo_unexecuted_blocks=1 00:28:27.028 00:28:27.028 ' 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:27.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:27.028 --rc genhtml_branch_coverage=1 00:28:27.028 --rc genhtml_function_coverage=1 00:28:27.028 --rc genhtml_legend=1 00:28:27.028 --rc geninfo_all_blocks=1 00:28:27.028 --rc geninfo_unexecuted_blocks=1 00:28:27.028 00:28:27.028 ' 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.028 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:27.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.029 00:10:01 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:27.029 00:10:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:33.602 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:33.603 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:33.603 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:33.603 Found net devices under 0000:86:00.0: cvl_0_0 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.603 00:10:07 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:33.603 Found net devices under 0000:86:00.1: cvl_0_1 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.603 00:10:07 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:28:33.603 00:28:33.603 --- 10.0.0.2 ping statistics --- 00:28:33.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.603 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:33.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:28:33.603 00:28:33.603 --- 10.0.0.1 ping statistics --- 00:28:33.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.603 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=455955 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 455955 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 455955 ']' 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:28:33.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:33.603 [2024-12-10 00:10:07.655965] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:28:33.603 [2024-12-10 00:10:07.656008] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.603 [2024-12-10 00:10:07.735377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:33.603 [2024-12-10 00:10:07.778340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.603 [2024-12-10 00:10:07.778375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.603 [2024-12-10 00:10:07.778382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.603 [2024-12-10 00:10:07.778389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.603 [2024-12-10 00:10:07.778394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.603 [2024-12-10 00:10:07.779819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.603 [2024-12-10 00:10:07.779855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.603 [2024-12-10 00:10:07.779961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.603 [2024-12-10 00:10:07.779962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.603 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:28:33.604 00:10:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py load_subsystem_config 00:28:36.137 00:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py framework_get_config bdev 00:28:36.137 00:10:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:36.396 00:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:28:36.396 00:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:36.654 00:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
00:28:36.654 00:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:28:36.654 00:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:36.654 00:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:36.654 00:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:36.654 [2024-12-10 00:10:11.565162] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.913 00:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:36.913 00:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:36.913 00:10:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:37.171 00:10:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:37.171 00:10:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:37.431 00:10:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.693 [2024-12-10 00:10:12.388165] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.693 00:10:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:37.693 00:10:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:28:37.693 00:10:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:28:37.693 00:10:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:37.693 00:10:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:28:39.074 Initializing NVMe Controllers 00:28:39.074 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:28:39.074 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:28:39.074 Initialization complete. Launching workers. 
00:28:39.074 ======================================================== 00:28:39.074 Latency(us) 00:28:39.074 Device Information : IOPS MiB/s Average min max 00:28:39.074 PCIE (0000:5e:00.0) NSID 1 from core 0: 97152.92 379.50 328.91 35.57 7264.89 00:28:39.074 ======================================================== 00:28:39.074 Total : 97152.92 379.50 328.91 35.57 7264.89 00:28:39.074 00:28:39.074 00:10:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:40.451 Initializing NVMe Controllers 00:28:40.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:40.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:40.451 Initialization complete. Launching workers. 00:28:40.451 ======================================================== 00:28:40.451 Latency(us) 00:28:40.451 Device Information : IOPS MiB/s Average min max 00:28:40.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 272.04 1.06 3819.11 120.62 45921.84 00:28:40.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.84 0.18 22666.20 5198.47 48875.93 00:28:40.451 ======================================================== 00:28:40.451 Total : 317.87 1.24 6536.87 120.62 48875.93 00:28:40.451 00:28:40.451 00:10:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:41.828 Initializing NVMe Controllers 00:28:41.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:41.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:41.828 Initialization complete. Launching workers. 00:28:41.828 ======================================================== 00:28:41.828 Latency(us) 00:28:41.828 Device Information : IOPS MiB/s Average min max 00:28:41.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10883.86 42.52 2939.48 520.22 9435.29 00:28:41.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3964.03 15.48 8109.23 4400.26 16049.18 00:28:41.828 ======================================================== 00:28:41.828 Total : 14847.88 58.00 4319.68 520.22 16049.18 00:28:41.828 00:28:41.828 00:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:41.828 00:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:41.828 00:10:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:44.360 Initializing NVMe Controllers 00:28:44.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:44.361 Controller IO queue size 128, less than required. 00:28:44.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:28:44.361 Controller IO queue size 128, less than required. 00:28:44.361 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:44.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:44.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:44.361 Initialization complete. Launching workers. 00:28:44.361 ======================================================== 00:28:44.361 Latency(us) 00:28:44.361 Device Information : IOPS MiB/s Average min max 00:28:44.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1812.96 453.24 71765.10 47624.34 130379.87 00:28:44.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 597.49 149.37 223011.38 88075.23 326399.02 00:28:44.361 ======================================================== 00:28:44.361 Total : 2410.44 602.61 109255.11 47624.34 326399.02 00:28:44.361 00:28:44.361 00:10:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:44.619 No valid NVMe controllers or AIO or URING devices found 00:28:44.619 Initializing NVMe Controllers 00:28:44.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:44.619 Controller IO queue size 128, less than required. 00:28:44.619 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:44.619 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:44.619 Controller IO queue size 128, less than required. 00:28:44.619 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:44.619 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:44.619 WARNING: Some requested NVMe devices were skipped 00:28:44.619 00:10:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:47.151 Initializing NVMe Controllers 00:28:47.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:47.151 Controller IO queue size 128, less than required. 00:28:47.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:47.151 Controller IO queue size 128, less than required. 00:28:47.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:47.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:47.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:47.151 Initialization complete. Launching workers. 
00:28:47.151 00:28:47.151 ==================== 00:28:47.151 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:47.151 TCP transport: 00:28:47.151 polls: 11616 00:28:47.151 idle_polls: 7828 00:28:47.151 sock_completions: 3788 00:28:47.151 nvme_completions: 6133 00:28:47.151 submitted_requests: 9152 00:28:47.151 queued_requests: 1 00:28:47.151 00:28:47.151 ==================== 00:28:47.151 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:47.151 TCP transport: 00:28:47.151 polls: 11621 00:28:47.151 idle_polls: 7763 00:28:47.151 sock_completions: 3858 00:28:47.151 nvme_completions: 6359 00:28:47.151 submitted_requests: 9516 00:28:47.151 queued_requests: 1 00:28:47.151 ======================================================== 00:28:47.151 Latency(us) 00:28:47.151 Device Information : IOPS MiB/s Average min max 00:28:47.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1531.35 382.84 85753.33 57681.35 144952.25 00:28:47.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1587.79 396.95 80684.04 41183.60 127581.13 00:28:47.151 ======================================================== 00:28:47.151 Total : 3119.14 779.79 83172.82 41183.60 144952.25 00:28:47.151 00:28:47.409 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:47.409 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:47.409 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:28:47.409 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:47.409 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:28:47.409 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:47.409 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:28:47.409 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:47.409 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:28:47.409 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:47.409 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:47.409 rmmod nvme_tcp 00:28:47.409 rmmod nvme_fabrics 00:28:47.409 rmmod nvme_keyring 00:28:47.668 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:47.668 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:28:47.668 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:28:47.668 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 455955 ']' 00:28:47.668 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 455955 00:28:47.668 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 455955 ']' 00:28:47.668 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 455955 00:28:47.668 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:28:47.668 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:47.668 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 455955 00:28:47.668 00:10:22 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:47.668 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:47.668 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 455955' 00:28:47.668 killing process with pid 455955 00:28:47.668 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 455955 00:28:47.668 00:10:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 455955 00:28:49.045 00:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:49.045 00:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:49.045 00:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:49.045 00:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:28:49.045 00:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:28:49.045 00:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:49.045 00:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:28:49.045 00:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.045 00:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.045 00:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.045 00:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.045 00:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.584 00:10:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:51.584 00:28:51.584 real 0m24.489s 00:28:51.584 user 1m4.200s 00:28:51.584 sys 0m8.218s 00:28:51.584 00:10:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:51.584 00:10:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:51.584 ************************************ 00:28:51.584 END TEST nvmf_perf 00:28:51.584 ************************************ 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.584 ************************************ 00:28:51.584 START TEST nvmf_fio_host 00:28:51.584 ************************************ 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:51.584 * Looking for test storage... 
00:28:51.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:51.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.584 --rc genhtml_branch_coverage=1 00:28:51.584 --rc genhtml_function_coverage=1 00:28:51.584 --rc genhtml_legend=1 00:28:51.584 --rc geninfo_all_blocks=1 00:28:51.584 --rc geninfo_unexecuted_blocks=1 00:28:51.584 00:28:51.584 ' 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:51.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.584 --rc genhtml_branch_coverage=1 00:28:51.584 --rc genhtml_function_coverage=1 00:28:51.584 --rc genhtml_legend=1 00:28:51.584 --rc geninfo_all_blocks=1 00:28:51.584 --rc geninfo_unexecuted_blocks=1 00:28:51.584 00:28:51.584 ' 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:51.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.584 --rc genhtml_branch_coverage=1 00:28:51.584 --rc genhtml_function_coverage=1 00:28:51.584 --rc genhtml_legend=1 00:28:51.584 --rc geninfo_all_blocks=1 00:28:51.584 --rc geninfo_unexecuted_blocks=1 00:28:51.584 00:28:51.584 ' 00:28:51.584 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:51.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.584 --rc genhtml_branch_coverage=1 00:28:51.584 --rc genhtml_function_coverage=1 00:28:51.584 --rc genhtml_legend=1 00:28:51.585 --rc geninfo_all_blocks=1 00:28:51.585 --rc geninfo_unexecuted_blocks=1 00:28:51.585 00:28:51.585 ' 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.585 00:10:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:51.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:28:51.585 
00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.585 00:10:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:58.159 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:58.159 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:58.159 Found net devices under 0000:86:00.0: cvl_0_0 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:58.159 Found net devices under 0000:86:00.1: cvl_0_1 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.159 00:10:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.159 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.159 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.159 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:58.159 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.159 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.159 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:58.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:28:58.160 00:28:58.160 --- 10.0.0.2 ping statistics --- 00:28:58.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.160 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:58.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:28:58.160 00:28:58.160 --- 10.0.0.1 ping statistics --- 00:28:58.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.160 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=462062 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 462062 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 462062 ']' 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.160 [2024-12-10 00:10:32.316972] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:28:58.160 [2024-12-10 00:10:32.317023] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.160 [2024-12-10 00:10:32.397056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:58.160 [2024-12-10 00:10:32.437902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.160 [2024-12-10 00:10:32.437937] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.160 [2024-12-10 00:10:32.437945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.160 [2024-12-10 00:10:32.437952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.160 [2024-12-10 00:10:32.437958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:58.160 [2024-12-10 00:10:32.439526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.160 [2024-12-10 00:10:32.439558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.160 [2024-12-10 00:10:32.439663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.160 [2024-12-10 00:10:32.439665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:58.160 [2024-12-10 00:10:32.706543] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:58.160 Malloc1 00:28:58.160 00:10:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:58.419 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:58.678 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:58.678 [2024-12-10 00:10:33.585254] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme' 00:28:58.937 00:10:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:59.509 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:59.509 fio-3.35 00:28:59.509 Starting 1 thread 00:29:02.038 
00:29:02.038 test: (groupid=0, jobs=1): err= 0: pid=462448: Tue Dec 10 00:10:36 2024 00:29:02.038 read: IOPS=11.7k, BW=45.7MiB/s (47.9MB/s)(91.6MiB/2005msec) 00:29:02.038 slat (nsec): min=1594, max=236919, avg=1728.67, stdev=2165.55 00:29:02.038 clat (usec): min=3201, max=10572, avg=6042.27, stdev=470.13 00:29:02.039 lat (usec): min=3235, max=10574, avg=6043.99, stdev=470.03 00:29:02.039 clat percentiles (usec): 00:29:02.039 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5669], 00:29:02.039 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6194], 00:29:02.039 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6783], 00:29:02.039 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 8717], 99.95th=[ 9634], 00:29:02.039 | 99.99th=[10159] 00:29:02.039 bw ( KiB/s): min=45776, max=47376, per=99.95%, avg=46780.00, stdev=723.30, samples=4 00:29:02.039 iops : min=11444, max=11844, avg=11695.00, stdev=180.82, samples=4 00:29:02.039 write: IOPS=11.6k, BW=45.4MiB/s (47.6MB/s)(91.0MiB/2005msec); 0 zone resets 00:29:02.039 slat (nsec): min=1635, max=226703, avg=1794.12, stdev=1653.08 00:29:02.039 clat (usec): min=2443, max=9478, avg=4878.49, stdev=368.06 00:29:02.039 lat (usec): min=2458, max=9480, avg=4880.29, stdev=367.99 00:29:02.039 clat percentiles (usec): 00:29:02.039 | 1.00th=[ 4015], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:29:02.039 | 30.00th=[ 4686], 40.00th=[ 4817], 50.00th=[ 4883], 60.00th=[ 4948], 00:29:02.039 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5342], 95.00th=[ 5473], 00:29:02.039 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 6718], 99.95th=[ 7767], 00:29:02.039 | 99.99th=[ 8717] 00:29:02.039 bw ( KiB/s): min=46168, max=46912, per=100.00%, avg=46484.00, stdev=358.75, samples=4 00:29:02.039 iops : min=11542, max=11728, avg=11621.00, stdev=89.69, samples=4 00:29:02.039 lat (msec) : 4=0.47%, 10=99.51%, 20=0.02% 00:29:02.039 cpu : usr=74.05%, sys=25.00%, ctx=111, majf=0, minf=2 00:29:02.039 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:02.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:02.039 issued rwts: total=23461,23298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.039 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:02.039 00:29:02.039 Run status group 0 (all jobs): 00:29:02.039 READ: bw=45.7MiB/s (47.9MB/s), 45.7MiB/s-45.7MiB/s (47.9MB/s-47.9MB/s), io=91.6MiB (96.1MB), run=2005-2005msec 00:29:02.039 WRITE: bw=45.4MiB/s (47.6MB/s), 45.4MiB/s-45.4MiB/s (47.6MB/s-47.6MB/s), io=91.0MiB (95.4MB), run=2005-2005msec 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:02.039 00:10:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme' 00:29:02.039 00:10:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:02.039 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:02.039 fio-3.35 00:29:02.039 Starting 1 thread 00:29:04.568 00:29:04.568 test: (groupid=0, jobs=1): err= 0: pid=463019: Tue Dec 10 00:10:39 2024 00:29:04.568 read: IOPS=10.9k, BW=171MiB/s (179MB/s)(343MiB/2006msec) 00:29:04.568 slat (nsec): min=2548, max=88429, avg=2838.29, stdev=1275.49 00:29:04.568 clat (usec): min=1243, max=13036, avg=6684.64, stdev=1515.39 00:29:04.568 lat (usec): min=1246, max=13051, avg=6687.48, stdev=1515.51 00:29:04.568 clat percentiles (usec): 00:29:04.568 | 1.00th=[ 3490], 5.00th=[ 4228], 10.00th=[ 4752], 20.00th=[ 5342], 00:29:04.568 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7177], 00:29:04.568 | 70.00th=[ 7504], 80.00th=[ 7832], 90.00th=[ 8455], 95.00th=[ 9241], 00:29:04.568 | 99.00th=[10552], 99.50th=[10945], 99.90th=[11731], 99.95th=[12518], 00:29:04.568 | 99.99th=[13042] 00:29:04.568 bw ( KiB/s): min=83904, max=94690, per=50.60%, avg=88584.50, stdev=5212.39, samples=4 00:29:04.568 iops : min= 5244, max= 5918, avg=5536.50, stdev=325.73, samples=4 00:29:04.568 
write: IOPS=6376, BW=99.6MiB/s (104MB/s)(181MiB/1821msec); 0 zone resets 00:29:04.568 slat (usec): min=29, max=378, avg=32.07, stdev= 7.58 00:29:04.568 clat (usec): min=3371, max=15056, avg=8671.25, stdev=1545.80 00:29:04.568 lat (usec): min=3401, max=15172, avg=8703.32, stdev=1547.47 00:29:04.568 clat percentiles (usec): 00:29:04.568 | 1.00th=[ 5735], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7373], 00:29:04.568 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:29:04.568 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11469], 00:29:04.568 | 99.00th=[13042], 99.50th=[13960], 99.90th=[14746], 99.95th=[14877], 00:29:04.568 | 99.99th=[15008] 00:29:04.568 bw ( KiB/s): min=88896, max=98459, per=90.55%, avg=92382.75, stdev=4360.88, samples=4 00:29:04.568 iops : min= 5556, max= 6153, avg=5773.75, stdev=272.24, samples=4 00:29:04.568 lat (msec) : 2=0.04%, 4=2.04%, 10=90.18%, 20=7.74% 00:29:04.568 cpu : usr=85.84%, sys=13.52%, ctx=29, majf=0, minf=2 00:29:04.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:04.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:04.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:04.568 issued rwts: total=21951,11611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:04.568 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:04.568 00:29:04.568 Run status group 0 (all jobs): 00:29:04.568 READ: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=343MiB (360MB), run=2006-2006msec 00:29:04.568 WRITE: bw=99.6MiB/s (104MB/s), 99.6MiB/s-99.6MiB/s (104MB/s-104MB/s), io=181MiB (190MB), run=1821-1821msec 00:29:04.568 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:04.569 rmmod nvme_tcp 00:29:04.569 rmmod nvme_fabrics 00:29:04.569 rmmod nvme_keyring 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 462062 ']' 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 462062 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 
462062 ']' 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 462062 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.569 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 462062 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 462062' 00:29:04.839 killing process with pid 462062 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 462062 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 462062 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.839 00:10:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.461 00:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.461 00:29:07.461 real 0m15.726s 00:29:07.461 user 0m45.880s 00:29:07.461 sys 0m6.415s 00:29:07.461 00:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.461 00:10:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.461 ************************************ 00:29:07.461 END TEST nvmf_fio_host 00:29:07.461 ************************************ 00:29:07.461 00:10:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:07.461 00:10:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:07.461 00:10:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.461 00:10:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.461 ************************************ 00:29:07.461 START TEST nvmf_failover 00:29:07.461 ************************************ 00:29:07.461 00:10:41 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:07.461 * Looking for test storage... 00:29:07.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:29:07.461 00:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:07.461 00:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:29:07.461 00:10:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:29:07.461 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:07.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.462 --rc genhtml_branch_coverage=1 00:29:07.462 --rc genhtml_function_coverage=1 00:29:07.462 --rc genhtml_legend=1 00:29:07.462 --rc geninfo_all_blocks=1 00:29:07.462 --rc geninfo_unexecuted_blocks=1 00:29:07.462 00:29:07.462 ' 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:07.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.462 --rc genhtml_branch_coverage=1 00:29:07.462 --rc genhtml_function_coverage=1 00:29:07.462 --rc genhtml_legend=1 00:29:07.462 --rc geninfo_all_blocks=1 00:29:07.462 --rc geninfo_unexecuted_blocks=1 00:29:07.462 00:29:07.462 ' 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:07.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.462 --rc genhtml_branch_coverage=1 00:29:07.462 --rc genhtml_function_coverage=1 00:29:07.462 --rc genhtml_legend=1 00:29:07.462 --rc geninfo_all_blocks=1 00:29:07.462 --rc geninfo_unexecuted_blocks=1 00:29:07.462 00:29:07.462 ' 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:07.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.462 --rc genhtml_branch_coverage=1 00:29:07.462 --rc genhtml_function_coverage=1 00:29:07.462 --rc genhtml_legend=1 00:29:07.462 --rc geninfo_all_blocks=1 00:29:07.462 --rc geninfo_unexecuted_blocks=1 00:29:07.462 00:29:07.462 ' 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:07.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.462 00:10:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.904 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:12.905 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:12.905 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.905 
00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:12.905 Found net devices under 0000:86:00.0: cvl_0_0 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:12.905 Found net devices under 0000:86:00.1: cvl_0_1 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:12.905 00:10:47 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.905 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:29:13.170 00:29:13.170 --- 10.0.0.2 ping statistics --- 00:29:13.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.170 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:13.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:29:13.170 00:29:13.170 --- 10.0.0.1 ping statistics --- 00:29:13.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.170 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=467011 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 467011 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 467011 ']' 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.170 00:10:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:13.170 [2024-12-10 00:10:48.022602] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:29:13.170 [2024-12-10 00:10:48.022649] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:13.170 [2024-12-10 00:10:48.087168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:13.434 [2024-12-10 00:10:48.129676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:13.434 [2024-12-10 00:10:48.129711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:13.434 [2024-12-10 00:10:48.129718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:13.434 [2024-12-10 00:10:48.129724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:13.434 [2024-12-10 00:10:48.129731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:13.434 [2024-12-10 00:10:48.131027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:13.434 [2024-12-10 00:10:48.131131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.434 [2024-12-10 00:10:48.131132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:13.434 00:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.434 00:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:29:13.434 00:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.434 00:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.434 00:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:13.434 00:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.434 00:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:13.709 [2024-12-10 00:10:48.439899] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.709 00:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:13.988 Malloc0 00:29:13.988 00:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:13.988 00:10:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:14.286 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:14.557 [2024-12-10 00:10:49.276673] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.557 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:14.557 [2024-12-10 00:10:49.473219] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:14.827 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:14.827 [2024-12-10 00:10:49.669847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:14.827 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=467277 00:29:14.827 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:14.827 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:14.827 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 467277 /var/tmp/bdevperf.sock 00:29:14.827 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 467277 ']' 00:29:14.827 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:14.827 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.827 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:14.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:14.827 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.827 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:15.103 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.103 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:29:15.103 00:10:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:15.407 NVMe0n1 00:29:15.407 00:10:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:15.702 00:29:15.702 00:10:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=467505 00:29:15.702 00:10:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:15.702 00:10:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:16.720 00:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:17.015 [2024-12-10 00:10:51.737101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7e20 is same with the state(6) to be set 00:29:17.015 [2024-12-10 00:10:51.737174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7e20 is same with the state(6) to be set 00:29:17.015 [2024-12-10 00:10:51.737183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7e20 is same with the state(6) to be set 
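For orientation, the failover exercise that host/failover.sh is driving at this point can be condensed into the sketch below. The individual commands are taken verbatim from the trace above (addresses, ports, NQN and bdevperf flags are the ones used in this run); the variable names, the backgrounding with '&' and the omission of waitforlisten/teardown are my simplifications, so treat it as a sketch rather than the script itself.

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
rpc="$rootdir/scripts/rpc.py"                    # RPC to the nvmf target started above
brpc="$rpc -s /var/tmp/bdevperf.sock"            # RPC to the bdevperf application

# bdevperf: wait for configuration over RPC (-z), then run 15 s of 4 KiB verify I/O at queue depth 128
"$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &

# one controller/bdev (NVMe0/NVMe0n1) with two TCP paths to the same subsystem,
# the second path attached as a failover path (-x failover)
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

# kick off the workload, then pull the active listener; the repeated
# "recv state of tqpair ... is same with the state(6)" notices that follow in the log
# are emitted while the target tears down the removed listener's connections
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Later in the run the same pattern repeats: a third path on port 4422 is attached, 4421 is removed, 4420 is re-added and 4422 removed, before the 15-second run completes and the bdevperf results JSON shown further below is collected.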
00:29:17.015 [2024-12-10 00:10:51.737190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7e20 is same with the state(6) to be set 00:29:17.015 [2024-12-10 00:10:51.737196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7e20 is same with the state(6) to be set 00:29:17.015 [2024-12-10 00:10:51.737203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7e20 is same with the state(6) to be set 00:29:17.015 [2024-12-10 00:10:51.737209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7e20 is same with the state(6) to be set 00:29:17.015 [2024-12-10 00:10:51.737215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7e20 is same with the state(6) to be set 00:29:17.015 [2024-12-10 00:10:51.737221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7e20 is same with the state(6) to be set 00:29:17.015 00:10:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:29:20.415 00:10:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:20.415 00:29:20.415 00:10:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:20.415 [2024-12-10 00:10:55.257135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) 
to be set 00:29:20.415 [2024-12-10 00:10:55.257541]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 [2024-12-10 00:10:55.257578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb8c40 is same with the state(6) to be set 00:29:20.415 00:10:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:29:23.710 00:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:23.710 [2024-12-10 00:10:58.483539] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.710 00:10:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:29:24.645 00:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:24.903 [2024-12-10 00:10:59.700317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb9800 is same with the state(6) to be set 00:29:24.903 00:10:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 467505 00:29:31.485 { 00:29:31.485 "results": [ 00:29:31.485 { 00:29:31.485 "job": "NVMe0n1", 00:29:31.485 "core_mask": "0x1", 00:29:31.485 "workload": "verify", 00:29:31.485 "status": "finished", 00:29:31.485 "verify_range": { 00:29:31.485 "start": 0, 00:29:31.485 "length": 16384 00:29:31.485 }, 00:29:31.485 "queue_depth": 128, 00:29:31.485 "io_size": 4096, 00:29:31.485 "runtime": 15.00499, 00:29:31.485 "iops": 11071.850097867442, 00:29:31.485 "mibps": 43.249414444794695, 00:29:31.485 "io_failed": 5805, 00:29:31.485 "io_timeout": 0, 00:29:31.485 "avg_latency_us": 11147.828282449638, 00:29:31.485 "min_latency_us": 427.4086956521739, 00:29:31.485 "max_latency_us": 30545.474782608697 00:29:31.485 } 00:29:31.485 ], 00:29:31.485 "core_count": 1 00:29:31.485 } 00:29:31.485 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 467277 00:29:31.485 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 467277 ']' 00:29:31.485 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 467277 00:29:31.485 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:29:31.485 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.485 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
467277 00:29:31.485 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:31.485 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:31.485 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 467277' 00:29:31.485 killing process with pid 467277 00:29:31.485 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 467277 00:29:31.485 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 467277 00:29:31.485 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:29:31.485 [2024-12-10 00:10:49.746376] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:29:31.485 [2024-12-10 00:10:49.746427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467277 ] 00:29:31.485 [2024-12-10 00:10:49.823703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.485 [2024-12-10 00:10:49.864049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.485 Running I/O for 15 seconds... 00:29:31.485 11287.00 IOPS, 44.09 MiB/s [2024-12-09T23:11:06.421Z] [2024-12-10 00:10:51.739342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.485 [2024-12-10 00:10:51.739376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.485 [2024-12-10 00:10:51.739400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.485 [2024-12-10 00:10:51.739418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.485 [2024-12-10 00:10:51.739435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.485 [2024-12-10 00:10:51.739454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.485 [2024-12-10 00:10:51.739469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.485 [2024-12-10 00:10:51.739485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.485 [2024-12-10 00:10:51.739502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.485 [2024-12-10 00:10:51.739518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.485 [2024-12-10 00:10:51.739535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:99560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.485 [2024-12-10 00:10:51.739552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.485 [2024-12-10 00:10:51.739576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.485 [2024-12-10 00:10:51.739592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.485 [2024-12-10 00:10:51.739610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.485 [2024-12-10 00:10:51.739626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.485 [2024-12-10 00:10:51.739641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.485 [2024-12-10 00:10:51.739656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.485 [2024-12-10 00:10:51.739673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.485 [2024-12-10 00:10:51.739689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.485 [2024-12-10 00:10:51.739704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.485 [2024-12-10 00:10:51.739712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.485 [2024-12-10 00:10:51.739720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 
00:10:51.739805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:99776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:99824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.739984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.739994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:104 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:99984 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.486 [2024-12-10 00:10:51.740313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.486 [2024-12-10 00:10:51.740342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100008 len:8 PRP1 0x0 PRP2 0x0 00:29:31.486 [2024-12-10 00:10:51.740349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.486 [2024-12-10 00:10:51.740359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100016 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100024 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100032 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100040 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 
00:10:51.740454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100048 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100056 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100064 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100072 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100080 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100088 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100096 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100104 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100112 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100120 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100128 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100136 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100144 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100152 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100160 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100168 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100176 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100184 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100192 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.487 [2024-12-10 00:10:51.740933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.487 [2024-12-10 00:10:51.740938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.487 [2024-12-10 00:10:51.740943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100200 len:8 PRP1 0x0 PRP2 0x0 00:29:31.487 [2024-12-10 00:10:51.740951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.740958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.740962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.740968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100208 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.740975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.740982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.740988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.740993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100216 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.740999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100224 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100232 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 
00:10:51.741056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100240 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100248 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100256 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100264 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100272 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100280 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741207] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100288 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100296 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100304 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100312 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100320 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100328 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741354] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100336 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100344 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100352 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100360 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100368 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100376 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 
00:10:51.741504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.741510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100384 len:8 PRP1 0x0 PRP2 0x0 00:29:31.488 [2024-12-10 00:10:51.741516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.488 [2024-12-10 00:10:51.741525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.488 [2024-12-10 00:10:51.741530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.488 [2024-12-10 00:10:51.753035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100392 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100400 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100408 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100416 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100424 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753222] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100432 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100440 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100448 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100456 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100464 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100472 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100480 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100488 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100496 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99568 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99576 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753600] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99584 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 
00:10:51.753642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99592 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753660] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99600 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99608 len:8 PRP1 0x0 PRP2 0x0 00:29:31.489 [2024-12-10 00:10:51.753719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.489 [2024-12-10 00:10:51.753728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.489 [2024-12-10 00:10:51.753736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.489 [2024-12-10 00:10:51.753744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99616 len:8 PRP1 0x0 PRP2 0x0 00:29:31.490 [2024-12-10 00:10:51.753753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:51.753803] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:31.490 [2024-12-10 00:10:51.753834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.490 [2024-12-10 00:10:51.753845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:51.753856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.490 [2024-12-10 00:10:51.753865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:51.753876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.490 [2024-12-10 00:10:51.753885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:51.753898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.490 [2024-12-10 00:10:51.753908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:51.753917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:31.490 [2024-12-10 00:10:51.753951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95afa0 (9): Bad file descriptor 00:29:31.490 [2024-12-10 00:10:51.757854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:31.490 [2024-12-10 00:10:51.826093] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:29:31.490 10776.00 IOPS, 42.09 MiB/s [2024-12-09T23:11:06.426Z] 10924.33 IOPS, 42.67 MiB/s [2024-12-09T23:11:06.426Z] 10958.25 IOPS, 42.81 MiB/s [2024-12-09T23:11:06.426Z] [2024-12-10 00:10:55.259686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.490 [2024-12-10 00:10:55.259718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.259741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:37896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.259759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.259777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:37912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.259794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.259809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.259827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.259844] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:37944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.259861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.490 [2024-12-10 00:10:55.259882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.490 [2024-12-10 00:10:55.259900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.490 [2024-12-10 00:10:55.259916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.490 [2024-12-10 00:10:55.259935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.490 [2024-12-10 00:10:55.259953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.490 [2024-12-10 00:10:55.259969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.490 [2024-12-10 00:10:55.259985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.259994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.490 [2024-12-10 00:10:55.260002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 
[2024-12-10 00:10:55.260207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.490 [2024-12-10 00:10:55.260266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.490 [2024-12-10 00:10:55.260275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:122 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38280 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.491 [2024-12-10 00:10:55.260737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.491 [2024-12-10 00:10:55.260755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.491 [2024-12-10 00:10:55.260772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.491 [2024-12-10 00:10:55.260788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.491 [2024-12-10 00:10:55.260804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.491 [2024-12-10 00:10:55.260820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 
[2024-12-10 00:10:55.260884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.491 [2024-12-10 00:10:55.260957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.491 [2024-12-10 00:10:55.260967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.260974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.260984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.260992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.492 [2024-12-10 00:10:55.261379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.492 [2024-12-10 00:10:55.261406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38544 len:8 PRP1 0x0 PRP2 0x0 00:29:31.492 [2024-12-10 00:10:55.261415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.492 [2024-12-10 00:10:55.261432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:29:31.492 [2024-12-10 00:10:55.261439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38552 len:8 PRP1 0x0 PRP2 0x0 00:29:31.492 [2024-12-10 00:10:55.261447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.492 [2024-12-10 00:10:55.261461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.492 [2024-12-10 00:10:55.261468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38560 len:8 PRP1 0x0 PRP2 0x0 00:29:31.492 [2024-12-10 00:10:55.261475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.492 [2024-12-10 00:10:55.261488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.492 [2024-12-10 00:10:55.261495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38568 len:8 PRP1 0x0 PRP2 0x0 00:29:31.492 [2024-12-10 00:10:55.261501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.492 [2024-12-10 00:10:55.261516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.492 [2024-12-10 00:10:55.261522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38576 len:8 PRP1 0x0 PRP2 0x0 00:29:31.492 [2024-12-10 00:10:55.261529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.492 [2024-12-10 00:10:55.261543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.492 [2024-12-10 00:10:55.261549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38584 len:8 PRP1 0x0 PRP2 0x0 00:29:31.492 [2024-12-10 00:10:55.261556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.492 [2024-12-10 00:10:55.261570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.492 [2024-12-10 00:10:55.261577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38592 len:8 PRP1 0x0 PRP2 0x0 00:29:31.492 [2024-12-10 00:10:55.261584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.492 [2024-12-10 00:10:55.261597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.492 [2024-12-10 
00:10:55.261603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38600 len:8 PRP1 0x0 PRP2 0x0 00:29:31.492 [2024-12-10 00:10:55.261611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.492 [2024-12-10 00:10:55.261624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.492 [2024-12-10 00:10:55.261633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38608 len:8 PRP1 0x0 PRP2 0x0 00:29:31.492 [2024-12-10 00:10:55.261640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.492 [2024-12-10 00:10:55.261647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.492 [2024-12-10 00:10:55.261654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.492 [2024-12-10 00:10:55.261659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38616 len:8 PRP1 0x0 PRP2 0x0 00:29:31.492 [2024-12-10 00:10:55.261668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.261675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.261685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.261692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38624 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.261700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.261709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.261717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.261725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38632 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.261732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.261743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.261750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.261756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38640 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.261764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.261771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.261776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.261782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38648 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.261790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.261797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.261804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.261811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38656 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.261823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.261830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.261836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.261842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38664 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.261850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.261861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.261867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.261873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38672 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.261880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.261888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.261894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.261900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38680 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.261908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.261916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.261922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.261928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38688 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.261935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.261942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.261947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.261954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:38696 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.261961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.261969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.261976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.261982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38704 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.261989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.261996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.262003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.262009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38712 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.262016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.262025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.262030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.262036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38720 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.262045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.262053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.262058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.262064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38728 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.262076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.262084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.262090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.272092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38736 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.272103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.272111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.272118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.272124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38744 len:8 PRP1 0x0 PRP2 0x0 
00:29:31.493 [2024-12-10 00:10:55.272130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.272137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.272142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.272148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38752 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.272155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.272165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.272170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.272176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38760 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.272182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.272189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.272195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.272201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38768 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.272208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.272214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.272219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.272226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38776 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.272233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.272240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.493 [2024-12-10 00:10:55.272245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.493 [2024-12-10 00:10:55.272251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38784 len:8 PRP1 0x0 PRP2 0x0 00:29:31.493 [2024-12-10 00:10:55.272260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.272317] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:31.493 [2024-12-10 00:10:55.272344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.493 [2024-12-10 00:10:55.272356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.272367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.493 [2024-12-10 00:10:55.272375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.272383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.493 [2024-12-10 00:10:55.272390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.493 [2024-12-10 00:10:55.272398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.494 [2024-12-10 00:10:55.272406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:55.272414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:29:31.494 [2024-12-10 00:10:55.272445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95afa0 (9): Bad file descriptor 00:29:31.494 [2024-12-10 00:10:55.275374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:29:31.494 [2024-12-10 00:10:55.302473] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:29:31.494 10893.40 IOPS, 42.55 MiB/s [2024-12-09T23:11:06.430Z] 10961.83 IOPS, 42.82 MiB/s [2024-12-09T23:11:06.430Z] 10996.29 IOPS, 42.95 MiB/s [2024-12-09T23:11:06.430Z] 11024.88 IOPS, 43.07 MiB/s [2024-12-09T23:11:06.430Z] 11048.67 IOPS, 43.16 MiB/s [2024-12-09T23:11:06.430Z] [2024-12-10 00:10:59.701578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.494 [2024-12-10 00:10:59.701615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.494 [2024-12-10 00:10:59.701638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701862] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.701987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.701995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.702002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.702010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.702016] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.702024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.702031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.702039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.702046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.702054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.702061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.702069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.702076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.702084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.702092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.702100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.494 [2024-12-10 00:10:59.702107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.494 [2024-12-10 00:10:59.702115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 
[2024-12-10 00:10:59.702335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.495 [2024-12-10 00:10:59.702341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.495 [2024-12-10 00:10:59.702357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.495 [2024-12-10 00:10:59.702372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.495 [2024-12-10 00:10:59.702387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.495 [2024-12-10 00:10:59.702402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.495 [2024-12-10 00:10:59.702418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.495 [2024-12-10 00:10:59.702433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:108 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.495 [2024-12-10 00:10:59.702694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.495 [2024-12-10 00:10:59.702721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50032 len:8 PRP1 0x0 PRP2 0x0 00:29:31.495 [2024-12-10 00:10:59.702729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.495 [2024-12-10 00:10:59.702775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.495 [2024-12-10 00:10:59.702783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.495 [2024-12-10 00:10:59.702790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.496 [2024-12-10 00:10:59.702797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.496 [2024-12-10 00:10:59.702804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.496 [2024-12-10 00:10:59.702812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:31.496 [2024-12-10 00:10:59.702819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.496 [2024-12-10 00:10:59.702826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95afa0 is same with the state(6) to be set 00:29:31.496 [2024-12-10 00:10:59.702950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.496 [2024-12-10 00:10:59.702957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:29:31.496 [2024-12-10 00:10:59.702964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50040 len:8 PRP1 0x0 PRP2 0x0 00:29:31.496 [2024-12-10 00:10:59.702971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.496 [2024-12-10 00:10:59.702981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.496 [2024-12-10 00:10:59.702987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.496 [2024-12-10 00:10:59.702992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50048 len:8 PRP1 0x0 PRP2 0x0 00:29:31.496 [2024-12-10 00:10:59.702999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.496 [2024-12-10 00:10:59.703006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.496 [2024-12-10 00:10:59.703011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.496 [2024-12-10 00:10:59.703017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50056 len:8 PRP1 0x0 PRP2 0x0 00:29:31.496 [2024-12-10 00:10:59.703024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.496 [2024-12-10 00:10:59.703031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.496 [2024-12-10 00:10:59.703037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.496 [2024-12-10 00:10:59.703042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50064 len:8 PRP1 0x0 PRP2 0x0 00:29:31.496 [2024-12-10 00:10:59.703049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.496 [2024-12-10 00:10:59.703055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.496 [2024-12-10 00:10:59.703060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.496 [2024-12-10 00:10:59.703066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50072 len:8 PRP1 0x0 PRP2 0x0 00:29:31.496 [2024-12-10 00:10:59.703076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.496 [2024-12-10 00:10:59.703083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.496 [2024-12-10 00:10:59.703089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.496 [2024-12-10 00:10:59.703094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50080 len:8 PRP1 0x0 PRP2 0x0 00:29:31.496 [2024-12-10 00:10:59.703101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.496 [2024-12-10 00:10:59.703107] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:31.496 [2024-12-10 00:10:59.703112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:31.496 [2024-12-10 00:10:59.703118] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50088 len:8 PRP1 0x0 PRP2 0x0
00:29:31.496 [2024-12-10 00:10:59.703124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:31.496 [2024-12-10 00:10:59.703131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:31.496 [2024-12-10 00:10:59.703136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[... same four-notice cycle repeated for every remaining queued request on this qpair (2024-12-10 00:10:59.703142 - 00:10:59.723880): WRITE lba:49552-50304 and READ lba:49288-49544, all sqid:1 cid:0 nsid:1 len:8 PRP1 0x0 PRP2 0x0, each aborted with ABORTED - SQ DELETION (00/08) and completed manually ...]
00:29:31.501 [2024-12-10 00:10:59.723930] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:29:31.501 [2024-12-10 
00:10:59.723949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:29:31.501 [2024-12-10 00:10:59.723987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95afa0 (9): Bad file descriptor 00:29:31.501 [2024-12-10 00:10:59.727900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:29:31.501 [2024-12-10 00:10:59.757781] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:29:31.501 11006.30 IOPS, 42.99 MiB/s [2024-12-09T23:11:06.437Z] 11015.36 IOPS, 43.03 MiB/s [2024-12-09T23:11:06.437Z] 11057.42 IOPS, 43.19 MiB/s [2024-12-09T23:11:06.437Z] 11078.69 IOPS, 43.28 MiB/s [2024-12-09T23:11:06.437Z] 11080.57 IOPS, 43.28 MiB/s 00:29:31.501 Latency(us) 00:29:31.501 [2024-12-09T23:11:06.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.501 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:31.501 Verification LBA range: start 0x0 length 0x4000 00:29:31.501 NVMe0n1 : 15.00 11071.85 43.25 386.87 0.00 11147.83 427.41 30545.47 00:29:31.501 [2024-12-09T23:11:06.437Z] =================================================================================================================== 00:29:31.501 [2024-12-09T23:11:06.437Z] Total : 11071.85 43.25 386.87 0.00 11147.83 427.41 30545.47 00:29:31.501 Received shutdown signal, test time was about 15.000000 seconds 00:29:31.501 00:29:31.501 Latency(us) 00:29:31.501 [2024-12-09T23:11:06.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.501 [2024-12-09T23:11:06.437Z] =================================================================================================================== 00:29:31.501 [2024-12-09T23:11:06.437Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:31.501 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:29:31.501 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:29:31.501 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:29:31.501 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=469881 00:29:31.501 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:29:31.501 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 469881 /var/tmp/bdevperf.sock 00:29:31.501 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 469881 ']' 00:29:31.501 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:31.501 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.501 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:31.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
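Editor's sketch (not part of the captured log): the trace above checks that exactly three "Resetting controller successful" events were recorded and then relaunches bdevperf in RPC-driven mode on its own UNIX socket. A minimal bash rendering of that pattern, assuming the same workspace layout and that the count is taken from the try.txt capture used elsewhere in this test, might look like:

#!/usr/bin/env bash
# Sketch of the reset-count check and bdevperf relaunch seen in the trace above.
# Paths are those of this job's workspace; the real script uses the waitforlisten
# helper from autotest_common.sh instead of the simple socket poll shown here.
set -euo pipefail

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
log=$rootdir/test/nvmf/host/try.txt   # assumed source of the grep; hypothetical here

# The first pass performed three failovers, so three successful resets are expected.
count=$(grep -c 'Resetting controller successful' "$log")
(( count == 3 )) || { echo "unexpected reset count: $count" >&2; exit 1; }

# Relaunch bdevperf in RPC-driven mode (-z) with the same queue depth and I/O size.
"$rootdir/build/examples/bdevperf" \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!

# Wait for the bdevperf RPC socket before issuing nvmf_subsystem_add_listener calls.
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done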
00:29:31.502 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.502 00:11:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:31.502 00:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.502 00:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:29:31.502 00:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:31.502 [2024-12-10 00:11:06.360573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:31.502 00:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:31.759 [2024-12-10 00:11:06.557076] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:31.760 00:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:32.325 NVMe0n1 00:29:32.326 00:11:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:32.326 00:29:32.326 00:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:29:32.897 00:29:32.897 00:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:32.897 00:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:29:33.156 00:11:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:33.156 00:11:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:29:36.438 00:11:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:36.438 00:11:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:29:36.438 00:11:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:36.439 00:11:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=470754 00:29:36.439 00:11:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 470754 00:29:37.814 { 00:29:37.814 "results": [ 00:29:37.814 { 00:29:37.814 "job": "NVMe0n1", 00:29:37.814 "core_mask": 
"0x1", 00:29:37.814 "workload": "verify", 00:29:37.814 "status": "finished", 00:29:37.814 "verify_range": { 00:29:37.814 "start": 0, 00:29:37.814 "length": 16384 00:29:37.814 }, 00:29:37.814 "queue_depth": 128, 00:29:37.814 "io_size": 4096, 00:29:37.814 "runtime": 1.00654, 00:29:37.814 "iops": 11362.688020346932, 00:29:37.814 "mibps": 44.3855000794802, 00:29:37.814 "io_failed": 0, 00:29:37.814 "io_timeout": 0, 00:29:37.814 "avg_latency_us": 11220.83154734253, 00:29:37.814 "min_latency_us": 769.335652173913, 00:29:37.815 "max_latency_us": 10143.83304347826 00:29:37.815 } 00:29:37.815 ], 00:29:37.815 "core_count": 1 00:29:37.815 } 00:29:37.815 00:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:29:37.815 [2024-12-10 00:11:05.969850] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:29:37.815 [2024-12-10 00:11:05.969906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469881 ] 00:29:37.815 [2024-12-10 00:11:06.046334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.815 [2024-12-10 00:11:06.083779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.815 [2024-12-10 00:11:08.040269] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:37.815 [2024-12-10 00:11:08.040314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.815 [2024-12-10 00:11:08.040326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.815 [2024-12-10 00:11:08.040336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.815 [2024-12-10 00:11:08.040343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.815 [2024-12-10 00:11:08.040350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.815 [2024-12-10 00:11:08.040357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.815 [2024-12-10 00:11:08.040364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.815 [2024-12-10 00:11:08.040371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.815 [2024-12-10 00:11:08.040378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:29:37.815 [2024-12-10 00:11:08.040403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:29:37.815 [2024-12-10 00:11:08.040418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2480fa0 (9): Bad file descriptor 00:29:37.815 [2024-12-10 00:11:08.060978] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:29:37.815 Running I/O for 1 seconds... 00:29:37.815 11309.00 IOPS, 44.18 MiB/s 00:29:37.815 Latency(us) 00:29:37.815 [2024-12-09T23:11:12.751Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.815 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:37.815 Verification LBA range: start 0x0 length 0x4000 00:29:37.815 NVMe0n1 : 1.01 11362.69 44.39 0.00 0.00 11220.83 769.34 10143.83 00:29:37.815 [2024-12-09T23:11:12.751Z] =================================================================================================================== 00:29:37.815 [2024-12-09T23:11:12.751Z] Total : 11362.69 44.39 0.00 0.00 11220.83 769.34 10143.83 00:29:37.815 00:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:37.815 00:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:29:37.815 00:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:38.075 00:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:38.075 00:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:29:38.075 00:11:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:38.334 00:11:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:29:41.616 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:29:41.616 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:41.616 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 469881 00:29:41.616 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 469881 ']' 00:29:41.616 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 469881 00:29:41.616 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:29:41.616 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:41.616 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 469881 00:29:41.616 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:41.616 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:41.616 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 469881' 00:29:41.616 killing process with pid 469881 00:29:41.616 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 469881 00:29:41.616 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 469881 00:29:41.874 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:29:41.874 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.133 rmmod nvme_tcp 00:29:42.133 rmmod nvme_fabrics 00:29:42.133 rmmod nvme_keyring 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 467011 ']' 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 467011 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 467011 ']' 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 467011 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 467011 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 467011' 00:29:42.133 killing process with pid 467011 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 467011 00:29:42.133 00:11:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 467011 00:29:42.393 00:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
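Editor's sketch (not part of the captured log): the teardown traced above stops the bdevperf driver, deletes the test subsystem, removes the captured log, and unloads the initiator kernel modules. A condensed bash version of that sequence, under the same path assumptions, could be:

#!/usr/bin/env bash
# Sketch of the nvmf_failover teardown traced above.
set -euo pipefail

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
rpc=$rootdir/scripts/rpc.py

# Stop the bdevperf process that drove the verify workload, then flush outstanding I/O.
kill "$bdevperf_pid" 2>/dev/null || true   # $bdevperf_pid captured when bdevperf started
sync

# Remove the test subsystem from the target and delete the captured per-test log.
"$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f "$rootdir/test/nvmf/host/try.txt"

# Unload the kernel initiator modules loaded for the test (cascades to nvme_keyring).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics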
00:29:42.393 00:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:42.393 00:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:42.393 00:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:29:42.393 00:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:29:42.393 00:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:42.393 00:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:29:42.393 00:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:42.393 00:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:42.393 00:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.393 00:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.393 00:11:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.299 00:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:44.299 00:29:44.299 real 0m37.335s 00:29:44.299 user 1m58.232s 00:29:44.299 sys 0m7.906s 00:29:44.299 00:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:44.299 00:11:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:44.299 ************************************ 00:29:44.299 END TEST nvmf_failover 00:29:44.299 ************************************ 00:29:44.299 00:11:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:44.299 00:11:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:44.299 00:11:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:44.299 00:11:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:44.558 ************************************ 00:29:44.558 START TEST nvmf_host_discovery 00:29:44.558 ************************************ 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:44.558 * Looking for test storage... 
00:29:44.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:44.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.558 --rc genhtml_branch_coverage=1 00:29:44.558 --rc genhtml_function_coverage=1 00:29:44.558 --rc genhtml_legend=1 00:29:44.558 --rc geninfo_all_blocks=1 00:29:44.558 --rc geninfo_unexecuted_blocks=1 00:29:44.558 00:29:44.558 ' 00:29:44.558 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:44.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.558 --rc genhtml_branch_coverage=1 00:29:44.558 --rc genhtml_function_coverage=1 00:29:44.559 --rc genhtml_legend=1 00:29:44.559 --rc geninfo_all_blocks=1 00:29:44.559 --rc geninfo_unexecuted_blocks=1 00:29:44.559 00:29:44.559 ' 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:44.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.559 --rc genhtml_branch_coverage=1 00:29:44.559 --rc genhtml_function_coverage=1 00:29:44.559 --rc genhtml_legend=1 00:29:44.559 --rc geninfo_all_blocks=1 00:29:44.559 --rc geninfo_unexecuted_blocks=1 00:29:44.559 00:29:44.559 ' 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:44.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:44.559 --rc genhtml_branch_coverage=1 00:29:44.559 --rc genhtml_function_coverage=1 00:29:44.559 --rc genhtml_legend=1 00:29:44.559 --rc geninfo_all_blocks=1 00:29:44.559 --rc geninfo_unexecuted_blocks=1 00:29:44.559 00:29:44.559 ' 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:44.559 00:11:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:44.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:29:44.559 00:11:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:51.126 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:51.126 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:51.126 00:11:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:51.126 Found net devices under 0000:86:00.0: cvl_0_0 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:51.126 Found net devices under 0000:86:00.1: cvl_0_1 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:51.126 
00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:51.126 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:51.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:29:51.127 00:29:51.127 --- 10.0.0.2 ping statistics --- 00:29:51.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.127 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:51.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:29:51.127 00:29:51.127 --- 10.0.0.1 ping statistics --- 00:29:51.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.127 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=475207 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 475207 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 475207 ']' 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.127 [2024-12-10 00:11:25.409243] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
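Editor's sketch (not part of the captured log): for the discovery test, the trace above opens the firewall for NVMe/TCP, verifies connectivity between the namespaced target interface and the initiator interface, and then starts nvmf_tgt inside the target namespace on core 1. A minimal bash sketch of that bring-up, using the interface names and paths this job reports (the real helper also tags the iptables rule with an SPDK_NVMF comment and uses waitforlisten rather than a socket poll), might be:

#!/usr/bin/env bash
# Sketch of the discovery-test target bring-up traced above.
set -euo pipefail

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
ns=cvl_0_0_ns_spdk

# Allow NVMe/TCP traffic arriving on the initiator-side interface at port 4420.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check connectivity in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$ns" ping -c 1 10.0.0.1

# Run nvmf_tgt on core 1 (-m 0x2) inside the target namespace and wait for its RPC socket.
ip netns exec "$ns" "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done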
00:29:51.127 [2024-12-10 00:11:25.409293] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.127 [2024-12-10 00:11:25.489779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.127 [2024-12-10 00:11:25.529628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.127 [2024-12-10 00:11:25.529663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.127 [2024-12-10 00:11:25.529670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.127 [2024-12-10 00:11:25.529677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.127 [2024-12-10 00:11:25.529682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.127 [2024-12-10 00:11:25.530225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.127 [2024-12-10 00:11:25.664895] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.127 [2024-12-10 00:11:25.677077] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.127 null0 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.127 null1 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=475226 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 475226 /tmp/host.sock 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 475226 ']' 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:51.127 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.127 [2024-12-10 00:11:25.752398] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
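By this point the target has a TCP transport, a discovery listener on 10.0.0.2:8009 and two null bdevs (null0, null1) ready to be exposed as namespaces, and a second nvmf_tgt instance has just been started with -r /tmp/host.sock to act as the NVMe-oF host for the discovery test. A condensed sketch of the target-side provisioning, issuing the same RPCs through scripts/rpc.py (path assumed) against the default /var/tmp/spdk.sock socket:

  RPC='/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'    # assumed rpc.py location

  $RPC nvmf_create_transport -t tcp -o -u 8192       # same transport options as the trace
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009                     # discovery service on port 8009
  $RPC bdev_null_create null0 1000 512               # null bdevs used as namespaces
  $RPC bdev_null_create null1 1000 512
  $RPC bdev_wait_for_examine                         # let bdev examination settle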
00:29:51.127 [2024-12-10 00:11:25.752439] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475226 ] 00:29:51.127 [2024-12-10 00:11:25.825667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.127 [2024-12-10 00:11:25.866802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:51.127 00:11:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.127 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:51.127 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:29:51.127 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:51.127 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:51.127 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.128 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:29:51.128 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.128 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:51.128 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.386 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.386 [2024-12-10 00:11:26.290655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.387 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.387 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:29:51.387 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:51.387 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:51.387 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.387 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:51.387 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.387 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:51.387 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:51.646 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.647 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:29:51.647 00:11:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:29:52.214 [2024-12-10 00:11:27.023707] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:52.214 [2024-12-10 00:11:27.023727] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:52.214 [2024-12-10 00:11:27.023739] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:52.472 
[2024-12-10 00:11:27.150116] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:52.472 [2024-12-10 00:11:27.251901] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:29:52.472 [2024-12-10 00:11:27.252602] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x16ea920:1 started. 00:29:52.472 [2024-12-10 00:11:27.254017] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:52.472 [2024-12-10 00:11:27.254033] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:52.472 [2024-12-10 00:11:27.261939] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x16ea920 was disconnected and freed. delete nvme_qpair. 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.732 00:11:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.732 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:52.991 [2024-12-10 00:11:27.704648] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x16eaca0:1 started. 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:52.991 [2024-12-10 00:11:27.712898] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x16eaca0 was disconnected and freed. delete nvme_qpair. 
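The host-side assertions in this trace are built from a few jq one-liners over the host application's RPC socket: controller names from bdev_nvme_get_controllers, namespace bdevs from bdev_get_bdevs, the ports of a controller's paths from .ctrlrs[].trid.trsvcid, and the number of events since a checkpoint from notify_get_notifications. A simplified restatement of those helpers (rpc.py path assumed; they print their result instead of setting globals the way the real script does):

  HOST_RPC='/path/to/spdk/scripts/rpc.py -s /tmp/host.sock'   # host app RPC socket

  get_subsystem_names() {   # controllers attached by the discovery service
      $HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {         # bdevs created for the attached namespaces
      $HOST_RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() {   # listener ports (trsvcid) of one controller
      $HOST_RPC bdev_nvme_get_controllers -n "$1" |
          jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
  get_notification_count() {  # events reported since notification id $1
      $HOST_RPC notify_get_notifications -i "$1" | jq '. | length'
  }

  # Example: after bdev_nvme_start_discovery the test expects one controller,
  # one namespace bdev and one new notification.
  get_subsystem_names        # -> nvme0
  get_bdev_list              # -> nvme0n1
  get_notification_count 0   # -> 1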
00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:52.991 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.992 [2024-12-10 00:11:27.802838] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:52.992 [2024-12-10 00:11:27.803428] bdev_nvme.c:7492:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:52.992 [2024-12-10 00:11:27.803449] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:52.992 00:11:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:52.992 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.251 [2024-12-10 00:11:27.929826] bdev_nvme.c:7434:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:53.251 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:29:53.251 00:11:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:29:53.251 [2024-12-10 00:11:28.028613] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:29:53.251 [2024-12-10 00:11:28.028647] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:53.251 [2024-12-10 00:11:28.028655] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:53.251 [2024-12-10 00:11:28.028660] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:54.188 00:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:54.188 00:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:29:54.188 00:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:29:54.188 00:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:54.188 00:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:54.188 00:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:54.188 00:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:54.188 00:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.188 00:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:54.188 00:11:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.188 [2024-12-10 00:11:29.058477] bdev_nvme.c:7492:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:54.188 [2024-12-10 00:11:29.058499] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:29:54.188 [2024-12-10 00:11:29.066506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.188 [2024-12-10 00:11:29.066534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.188 [2024-12-10 00:11:29.066547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.188 [2024-12-10 00:11:29.066554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.188 [2024-12-10 00:11:29.066562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.188 [2024-12-10 00:11:29.066569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.188 [2024-12-10 00:11:29.066576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:54.188 [2024-12-10 00:11:29.066582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:54.188 [2024-12-10 00:11:29.066588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bc930 is same with the state(6) to be set 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:54.188 [2024-12-10 00:11:29.076519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bc930 (9): Bad file descriptor 00:29:54.188 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.188 [2024-12-10 00:11:29.086554] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:54.188 [2024-12-10 00:11:29.086565] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:54.188 [2024-12-10 00:11:29.086572] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:54.188 [2024-12-10 00:11:29.086577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:54.188 [2024-12-10 00:11:29.086595] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:54.188 [2024-12-10 00:11:29.086754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.188 [2024-12-10 00:11:29.086768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bc930 with addr=10.0.0.2, port=4420 00:29:54.188 [2024-12-10 00:11:29.086776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bc930 is same with the state(6) to be set 00:29:54.188 [2024-12-10 00:11:29.086787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bc930 (9): Bad file descriptor 00:29:54.188 [2024-12-10 00:11:29.086805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:54.188 [2024-12-10 00:11:29.086813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:54.188 [2024-12-10 00:11:29.086822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:54.188 [2024-12-10 00:11:29.086828] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:54.188 [2024-12-10 00:11:29.086833] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:29:54.188 [2024-12-10 00:11:29.086837] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:54.188 [2024-12-10 00:11:29.096626] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:54.188 [2024-12-10 00:11:29.096637] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:54.188 [2024-12-10 00:11:29.096642] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:54.188 [2024-12-10 00:11:29.096646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:54.188 [2024-12-10 00:11:29.096660] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:54.188 [2024-12-10 00:11:29.096903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.188 [2024-12-10 00:11:29.096918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bc930 with addr=10.0.0.2, port=4420 00:29:54.188 [2024-12-10 00:11:29.096926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bc930 is same with the state(6) to be set 00:29:54.188 [2024-12-10 00:11:29.096937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bc930 (9): Bad file descriptor 00:29:54.188 [2024-12-10 00:11:29.096954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:54.188 [2024-12-10 00:11:29.096961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:54.188 [2024-12-10 00:11:29.096968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:54.188 [2024-12-10 00:11:29.096975] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:54.188 [2024-12-10 00:11:29.096979] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:54.189 [2024-12-10 00:11:29.096983] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:54.189 [2024-12-10 00:11:29.106691] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:54.189 [2024-12-10 00:11:29.106705] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:54.189 [2024-12-10 00:11:29.106709] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:54.189 [2024-12-10 00:11:29.106713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:54.189 [2024-12-10 00:11:29.106729] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:54.189 [2024-12-10 00:11:29.106896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.189 [2024-12-10 00:11:29.106909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bc930 with addr=10.0.0.2, port=4420 00:29:54.189 [2024-12-10 00:11:29.106917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bc930 is same with the state(6) to be set 00:29:54.189 [2024-12-10 00:11:29.106929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bc930 (9): Bad file descriptor 00:29:54.189 [2024-12-10 00:11:29.106939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:54.189 [2024-12-10 00:11:29.106945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:54.189 [2024-12-10 00:11:29.106952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:54.189 [2024-12-10 00:11:29.106959] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:54.189 [2024-12-10 00:11:29.106963] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:54.189 [2024-12-10 00:11:29.106970] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:54.189 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.189 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:54.189 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:54.189 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:29:54.189 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:54.189 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:54.189 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:29:54.189 [2024-12-10 00:11:29.116760] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:54.189 [2024-12-10 00:11:29.116773] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:54.189 [2024-12-10 00:11:29.116777] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:54.189 [2024-12-10 00:11:29.116781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:54.189 [2024-12-10 00:11:29.116796] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:54.189 [2024-12-10 00:11:29.116977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.189 [2024-12-10 00:11:29.116989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bc930 with addr=10.0.0.2, port=4420 00:29:54.189 [2024-12-10 00:11:29.116997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bc930 is same with the state(6) to be set 00:29:54.189 [2024-12-10 00:11:29.117008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bc930 (9): Bad file descriptor 00:29:54.189 [2024-12-10 00:11:29.117019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:54.189 [2024-12-10 00:11:29.117028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:54.189 [2024-12-10 00:11:29.117036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:54.189 [2024-12-10 00:11:29.117042] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:54.189 [2024-12-10 00:11:29.117046] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:54.189 [2024-12-10 00:11:29.117050] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:54.189 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:29:54.189 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.189 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:54.189 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.189 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:54.189 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.189 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:54.448 [2024-12-10 00:11:29.126827] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:54.448 [2024-12-10 00:11:29.126842] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:54.448 [2024-12-10 00:11:29.126850] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:54.448 [2024-12-10 00:11:29.126854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:54.448 [2024-12-10 00:11:29.126871] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:54.448 [2024-12-10 00:11:29.127096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.448 [2024-12-10 00:11:29.127112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bc930 with addr=10.0.0.2, port=4420 00:29:54.448 [2024-12-10 00:11:29.127120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bc930 is same with the state(6) to be set 00:29:54.448 [2024-12-10 00:11:29.127131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bc930 (9): Bad file descriptor 00:29:54.448 [2024-12-10 00:11:29.127142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:54.448 [2024-12-10 00:11:29.127148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:54.448 [2024-12-10 00:11:29.127155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:54.448 [2024-12-10 00:11:29.127170] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:54.448 [2024-12-10 00:11:29.127175] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:54.448 [2024-12-10 00:11:29.127179] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:54.448 [2024-12-10 00:11:29.136902] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:54.448 [2024-12-10 00:11:29.136914] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:54.448 [2024-12-10 00:11:29.136918] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:54.448 [2024-12-10 00:11:29.136922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:54.448 [2024-12-10 00:11:29.136936] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:54.449 [2024-12-10 00:11:29.137212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.449 [2024-12-10 00:11:29.137228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bc930 with addr=10.0.0.2, port=4420 00:29:54.449 [2024-12-10 00:11:29.137237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bc930 is same with the state(6) to be set 00:29:54.449 [2024-12-10 00:11:29.137249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bc930 (9): Bad file descriptor 00:29:54.449 [2024-12-10 00:11:29.137259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:54.449 [2024-12-10 00:11:29.137266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:54.449 [2024-12-10 00:11:29.137273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:54.449 [2024-12-10 00:11:29.137279] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
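errno 111 in the posix_sock_create errors above is ECONNREFUSED: the host keeps retrying 10.0.0.2:4420 after the listener on that port was removed, and each reconnect poll fails until discovery repoints the controller at 4421 (the "not found" / "found again" lines just below confirm the switch). When this pattern is unexpected rather than induced by the test, one quick check is to list which NVMe/TCP ports the target is still listening on inside its network namespace; the namespace name here is taken from the nvmftestinit output later in this log, and the use of ss is a suggested diagnostic, not part of the test script.

    # Show listening TCP sockets in the target-side namespace and pick out the
    # NVMe/TCP ports exercised by this run (4420 dropped, 4421 added).
    ip netns exec cvl_0_0_ns_spdk ss -ltn | grep -E ':(4420|4421)\b' || true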
00:29:54.449 [2024-12-10 00:11:29.137284] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:54.449 [2024-12-10 00:11:29.137288] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:54.449 [2024-12-10 00:11:29.145345] bdev_nvme.c:7297:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:54.449 [2024-12-10 00:11:29.145367] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:54.449 00:11:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:54.449 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.708 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:29:54.708 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:29:54.708 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:29:54.708 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:29:54.708 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:54.709 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.709 00:11:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.643 [2024-12-10 00:11:30.401388] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:55.643 [2024-12-10 00:11:30.401407] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:55.643 [2024-12-10 00:11:30.401418] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:55.643 [2024-12-10 00:11:30.488672] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:55.902 [2024-12-10 00:11:30.595410] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:29:55.903 [2024-12-10 00:11:30.596030] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x16f0970:1 started. 00:29:55.903 [2024-12-10 00:11:30.597684] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:55.903 [2024-12-10 00:11:30.597711] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:55.903 [2024-12-10 00:11:30.600618] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x16f0970 was disconnected and freed. delete nvme_qpair. 
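At host/discovery.sh@141 the test restarts the discovery service on the same host socket; the attach trace above ("ctrlr was created to 10.0.0.2:4421 ... attach nvme0 done") shows that call succeeding. The NOT rpc_cmd step that follows repeats the identical call and expects it to be rejected, because a discovery service named nvme already exists on that socket; the request/response JSON just below shows the resulting -17 "File exists" error. Issued by hand, the call copied from the trace looks like this (rpc_cmd is the harness's thin wrapper around scripts/rpc.py):

    # Start discovery against 10.0.0.2:8009 exactly as the test does.
    # Re-running it while the 'nvme' discovery service is active is expected to
    # fail with JSON-RPC error -17 ("File exists"), as the response below shows.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w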
00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.903 request: 00:29:55.903 { 00:29:55.903 "name": "nvme", 00:29:55.903 "trtype": "tcp", 00:29:55.903 "traddr": "10.0.0.2", 00:29:55.903 "adrfam": "ipv4", 00:29:55.903 "trsvcid": "8009", 00:29:55.903 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:55.903 "wait_for_attach": true, 00:29:55.903 "method": "bdev_nvme_start_discovery", 00:29:55.903 "req_id": 1 00:29:55.903 } 00:29:55.903 Got JSON-RPC error response 00:29:55.903 response: 00:29:55.903 { 00:29:55.903 "code": -17, 00:29:55.903 "message": "File exists" 00:29:55.903 } 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.903 request: 00:29:55.903 { 00:29:55.903 "name": "nvme_second", 00:29:55.903 "trtype": "tcp", 00:29:55.903 "traddr": "10.0.0.2", 00:29:55.903 "adrfam": "ipv4", 00:29:55.903 "trsvcid": "8009", 00:29:55.903 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:55.903 "wait_for_attach": true, 00:29:55.903 "method": "bdev_nvme_start_discovery", 00:29:55.903 "req_id": 1 00:29:55.903 } 00:29:55.903 Got JSON-RPC error response 00:29:55.903 response: 00:29:55.903 { 00:29:55.903 "code": -17, 00:29:55.903 "message": "File exists" 00:29:55.903 } 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.903 00:11:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:57.279 [2024-12-10 00:11:31.837004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.279 [2024-12-10 00:11:31.837031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d2cb0 with addr=10.0.0.2, port=8010 00:29:57.279 [2024-12-10 00:11:31.837044] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:57.279 [2024-12-10 00:11:31.837051] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:57.279 [2024-12-10 00:11:31.837058] 
bdev_nvme.c:7578:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:58.215 [2024-12-10 00:11:32.839621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.215 [2024-12-10 00:11:32.839645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d2cb0 with addr=10.0.0.2, port=8010 00:29:58.215 [2024-12-10 00:11:32.839657] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:58.215 [2024-12-10 00:11:32.839663] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:58.215 [2024-12-10 00:11:32.839669] bdev_nvme.c:7578:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:59.150 [2024-12-10 00:11:33.841762] bdev_nvme.c:7553:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:59.150 request: 00:29:59.150 { 00:29:59.150 "name": "nvme_second", 00:29:59.150 "trtype": "tcp", 00:29:59.150 "traddr": "10.0.0.2", 00:29:59.150 "adrfam": "ipv4", 00:29:59.150 "trsvcid": "8010", 00:29:59.150 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:59.150 "wait_for_attach": false, 00:29:59.150 "attach_timeout_ms": 3000, 00:29:59.150 "method": "bdev_nvme_start_discovery", 00:29:59.150 "req_id": 1 00:29:59.150 } 00:29:59.150 Got JSON-RPC error response 00:29:59.150 response: 00:29:59.150 { 00:29:59.150 "code": -110, 00:29:59.150 "message": "Connection timed out" 00:29:59.150 } 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 475226 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:59.150 rmmod nvme_tcp 00:29:59.150 rmmod nvme_fabrics 00:29:59.150 rmmod nvme_keyring 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 475207 ']' 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 475207 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 475207 ']' 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 475207 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.150 00:11:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 475207 00:29:59.150 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:59.150 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:59.150 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 475207' 00:29:59.150 killing process with pid 475207 00:29:59.150 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 475207 00:29:59.150 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 475207 00:29:59.409 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:59.409 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:59.409 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:59.409 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:29:59.409 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:29:59.409 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:59.409 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:29:59.409 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.409 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:59.409 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.409 00:11:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.409 00:11:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.319 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:01.319 00:30:01.319 real 0m16.972s 00:30:01.319 user 0m20.197s 00:30:01.319 sys 0m5.715s 00:30:01.319 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:01.319 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:01.319 ************************************ 00:30:01.319 END TEST nvmf_host_discovery 00:30:01.319 ************************************ 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.579 ************************************ 00:30:01.579 START TEST nvmf_host_multipath_status 00:30:01.579 ************************************ 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:01.579 * Looking for test storage... 00:30:01.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- scripts/common.sh@345 -- # : 1 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:01.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.579 --rc genhtml_branch_coverage=1 00:30:01.579 --rc genhtml_function_coverage=1 00:30:01.579 --rc genhtml_legend=1 00:30:01.579 --rc geninfo_all_blocks=1 00:30:01.579 --rc geninfo_unexecuted_blocks=1 00:30:01.579 00:30:01.579 ' 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:01.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.579 --rc genhtml_branch_coverage=1 00:30:01.579 --rc genhtml_function_coverage=1 00:30:01.579 --rc genhtml_legend=1 00:30:01.579 --rc geninfo_all_blocks=1 00:30:01.579 --rc geninfo_unexecuted_blocks=1 00:30:01.579 00:30:01.579 ' 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:01.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.579 --rc genhtml_branch_coverage=1 00:30:01.579 --rc genhtml_function_coverage=1 00:30:01.579 --rc genhtml_legend=1 00:30:01.579 --rc geninfo_all_blocks=1 00:30:01.579 --rc geninfo_unexecuted_blocks=1 00:30:01.579 00:30:01.579 ' 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:01.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.579 --rc genhtml_branch_coverage=1 00:30:01.579 --rc genhtml_function_coverage=1 00:30:01.579 --rc genhtml_legend=1 00:30:01.579 --rc 
geninfo_all_blocks=1 00:30:01.579 --rc geninfo_unexecuted_blocks=1 00:30:01.579 00:30:01.579 ' 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:30:01.579 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:30:01.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/bpftrace.sh 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:01.839 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.840 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.840 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.840 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:01.840 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:01.840 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:30:01.840 00:11:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:08.411 00:11:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.411 
00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:08.411 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:08.411 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:08.411 Found net devices under 0000:86:00.0: cvl_0_0 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.411 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:08.412 Found net devices under 0000:86:00.1: cvl_0_1 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:08.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:30:08.412 00:30:08.412 --- 10.0.0.2 ping statistics --- 00:30:08.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.412 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:08.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:30:08.412 00:30:08.412 --- 10.0.0.1 ping statistics --- 00:30:08.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.412 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=480303 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 480303 
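The nvmf_tcp_init sequence above (address flush, namespace creation, 10.0.0.x addressing, iptables rule, bidirectional ping) can be replayed by hand; this condensed sketch uses the interface names and addresses from this run and assumes root privileges:

  #!/usr/bin/env bash
  set -e
  TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"                          # target port lives in its own namespace
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # initiator IP stays on the host
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target IP inside the namespace
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                            # host -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                        # target namespace -> host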
00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 480303 ']' 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:08.412 [2024-12-10 00:11:42.455608] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:30:08.412 [2024-12-10 00:11:42.455653] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.412 [2024-12-10 00:11:42.534925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:08.412 [2024-12-10 00:11:42.575352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.412 [2024-12-10 00:11:42.575388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.412 [2024-12-10 00:11:42.575395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.412 [2024-12-10 00:11:42.575401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.412 [2024-12-10 00:11:42.575406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
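nvmfappstart amounts to launching nvmf_tgt inside the target namespace (the command is visible at the start of this entry) and blocking until its RPC socket answers; waitforlisten is the autotest helper that does the blocking. A rough stand-alone equivalent, polling rpc_get_methods from an SPDK checkout (the polling loop and relative paths are assumptions, not from this trace), is:

  #!/usr/bin/env bash
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the target is ready.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is serving RPCs on /var/tmp/spdk.sock"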
00:30:08.412 [2024-12-10 00:11:42.576542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.412 [2024-12-10 00:11:42.576544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=480303 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:08.412 [2024-12-10 00:11:42.874335] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.412 00:11:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:08.412 Malloc0 00:30:08.412 00:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:08.412 00:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:08.671 00:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:08.930 [2024-12-10 00:11:43.706835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.930 00:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:09.188 [2024-12-10 00:11:43.895321] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:09.188 00:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:09.188 00:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=480553 00:30:09.188 00:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:09.188 00:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 480553 
/var/tmp/bdevperf.sock 00:30:09.188 00:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 480553 ']' 00:30:09.188 00:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:09.188 00:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.188 00:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:09.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:09.188 00:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.188 00:11:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:09.446 00:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.446 00:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:30:09.446 00:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:09.446 00:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:10.012 Nvme0n1 00:30:10.012 00:11:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:10.577 Nvme0n1 00:30:10.577 00:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:10.577 00:11:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:12.479 00:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:12.479 00:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:12.737 00:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:12.996 00:11:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:13.931 00:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:13.931 00:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 
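The RPC calls in the preceding entries build one subsystem with two TCP listeners and then hand bdevperf two paths to it; condensed into a script (commands copied from the trace, paths shortened to an SPDK checkout), the sequence looks like:

  #!/usr/bin/env bash
  RPC="./scripts/rpc.py"                                # target-side RPC socket (/var/tmp/spdk.sock)
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # Initiator side: bdevperf in RPC mode, then two attach_controller calls to the same
  # subsystem so both paths land under one multipath bdev (Nvme0n1).
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  # (wait for /var/tmp/bdevperf.sock to answer, as in the waitforlisten sketch above)
  BPERF="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $BPERF bdev_nvme_set_options -r -1
  $BPERF bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  $BPERF bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10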
00:30:13.931 00:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:13.931 00:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:14.192 00:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.192 00:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:14.192 00:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:14.192 00:11:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:14.450 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:14.450 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:14.450 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:14.450 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:14.450 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.450 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:14.450 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:14.450 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:14.709 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.709 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:14.709 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:14.709 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:14.967 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:14.967 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:14.967 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:30:14.967 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:15.225 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:15.225 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:15.225 00:11:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:15.484 00:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:15.741 00:11:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:16.674 00:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:16.674 00:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:16.674 00:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.674 00:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:16.931 00:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:16.931 00:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:16.931 00:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.931 00:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:16.931 00:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:16.931 00:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:16.931 00:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:16.931 00:11:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:17.189 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:17.189 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:17.189 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:17.189 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:17.448 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:17.448 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:17.448 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:17.448 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:17.712 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:17.712 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:17.712 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:17.712 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:17.974 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:17.974 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:17.974 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:17.974 00:11:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:18.233 00:11:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:19.607 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:19.607 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:19.607 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.607 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:19.607 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.607 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:19.607 
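Each check_status block in this trace is six of these probes: port_status queries bdev_nvme_get_io_paths on the bdevperf RPC socket and compares one field (current, connected or accessible) of the path whose trsvcid matches the listener port against the expected value. A condensed restatement of the helper as it appears here:

  # port_status <trsvcid> <field> <expected>; field is current|connected|accessible
  port_status() {
    local port=$1 field=$2 expected=$3 actual
    actual=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
             | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
  }
  port_status 4420 current true      # the 4420 path is the one carrying I/O
  port_status 4421 accessible true   # the 4421 path is usable according to its ANA state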
00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.607 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:19.607 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:19.607 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:19.607 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.607 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:19.865 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:19.865 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:19.865 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:19.865 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:20.124 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.124 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:20.124 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.124 00:11:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:20.381 00:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.381 00:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:20.381 00:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:20.381 00:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:20.639 00:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:20.639 00:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:20.639 00:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:20.897 00:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:20.897 00:11:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:22.276 00:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:22.276 00:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:22.276 00:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.276 00:11:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:22.276 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.276 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:22.276 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:22.276 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.534 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:22.534 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:22.534 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.534 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:22.534 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.792 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:22.792 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:22.792 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:22.792 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:22.792 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:22.792 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:22.792 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.052 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:23.052 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:23.052 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:23.052 00:11:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:23.314 00:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:23.314 00:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:30:23.314 00:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:23.573 00:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:23.831 00:11:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:30:24.765 00:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:30:24.765 00:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:24.765 00:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:24.765 00:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:25.023 00:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:25.023 00:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:25.023 00:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.023 00:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:25.023 00:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:25.023 00:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # 
port_status 4420 connected true 00:30:25.281 00:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.281 00:11:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:25.281 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.281 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:25.281 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.281 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:25.538 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:25.538 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:25.538 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.538 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:25.796 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:25.796 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:25.796 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:25.796 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:26.054 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:26.054 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:30:26.054 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:30:26.054 00:12:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:26.312 00:12:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:30:27.246 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 
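set_ANA_state is simply the pair of target-side RPCs shown throughout this trace, one per listener; the initiator's path fields then follow the ANA group state on the next poll (an inaccessible path stays connected=true but drops to accessible=false, which is what the false/false entries in the previous check reflect). Written as a helper under those assumptions:

  # set_ANA_state <state-for-4420> <state-for-4421>; states: optimized|non_optimized|inaccessible
  set_ANA_state() {
    ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  set_ANA_state inaccessible optimized   # after ~1s, expect 4420 accessible=false and 4421 current=true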
00:30:27.246 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:27.504 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.504 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:27.504 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:27.504 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:27.504 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.504 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:27.764 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:27.764 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:27.764 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:27.764 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:28.022 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.022 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:28.022 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.022 00:12:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:28.281 00:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.281 00:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:30:28.281 00:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.281 00:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:28.540 00:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:28.540 00:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:28.540 00:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:28.540 00:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:28.540 00:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:28.540 00:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:30:28.798 00:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:30:28.798 00:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:29.057 00:12:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:29.316 00:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:30:30.249 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:30:30.249 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:30.249 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.249 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:30.507 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.507 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:30.507 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.507 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:30.766 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:30.766 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:30.766 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:30.766 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
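The bdev_nvme_set_multipath_policy call above changes what the remaining checks expect: under the default active_passive policy only one optimized path reports current=true at a time, whereas active_active lets every optimized path carry I/O, which is why the next check looks for current=true on both 4420 and 4421. The switch plus a quick verification (one RPC and a jq one-liner; the output formatting is an assumption) looks like:

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  # With both listeners optimized, every path should now be current:
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current)"'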
00:30:31.025 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.025 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:31.025 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.025 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:31.025 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.025 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:31.025 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.025 00:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:31.284 00:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.284 00:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:31.284 00:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:31.284 00:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:31.542 00:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:31.542 00:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:30:31.542 00:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:31.800 00:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:31.800 00:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:30:33.176 00:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:30:33.176 00:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:33.176 00:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.176 00:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:33.176 00:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:33.176 00:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:33.176 00:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.176 00:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:33.435 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.435 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:33.435 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.435 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:33.435 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.435 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:33.435 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.435 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:33.694 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.694 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:33.694 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.694 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:33.956 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:33.956 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:33.956 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:33.956 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:34.217 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:30:34.217 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:30:34.217 00:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:34.475 00:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:34.475 00:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:30:35.857 00:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:30:35.857 00:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:35.857 00:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:35.857 00:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:35.857 00:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:35.857 00:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:35.857 00:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:35.857 00:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:36.114 00:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.114 00:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:36.114 00:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.114 00:12:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:36.114 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.114 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:36.114 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:36.114 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.372 00:12:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.372 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:36.372 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.372 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:36.630 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.630 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:36.630 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:36.630 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:36.889 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:36.889 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:30:36.889 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:37.147 00:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:37.405 00:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:30:38.338 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:30:38.338 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:38.338 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:38.338 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:38.596 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:38.596 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:38.596 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:38.596 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:38.854 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:38.854 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:38.854 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:38.854 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:38.854 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:38.854 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:38.854 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:38.854 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:39.113 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.113 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:39.113 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.113 00:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:39.371 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:39.371 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:39.371 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:39.371 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:39.630 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:39.630 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 480553 00:30:39.630 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 480553 ']' 00:30:39.630 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 480553 00:30:39.630 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:30:39.630 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:39.630 00:12:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 480553 00:30:39.630 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:30:39.630 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:30:39.630 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 480553' 00:30:39.630 killing process with pid 480553 00:30:39.630 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 480553 00:30:39.630 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 480553 00:30:39.630 { 00:30:39.630 "results": [ 00:30:39.630 { 00:30:39.630 "job": "Nvme0n1", 00:30:39.630 "core_mask": "0x4", 00:30:39.630 "workload": "verify", 00:30:39.630 "status": "terminated", 00:30:39.630 "verify_range": { 00:30:39.630 "start": 0, 00:30:39.630 "length": 16384 00:30:39.630 }, 00:30:39.630 "queue_depth": 128, 00:30:39.630 "io_size": 4096, 00:30:39.630 "runtime": 29.058184, 00:30:39.630 "iops": 10430.14250305525, 00:30:39.630 "mibps": 40.74274415255957, 00:30:39.630 "io_failed": 0, 00:30:39.630 "io_timeout": 0, 00:30:39.630 "avg_latency_us": 12251.927897340687, 00:30:39.630 "min_latency_us": 498.6434782608696, 00:30:39.630 "max_latency_us": 3019898.88 00:30:39.630 } 00:30:39.630 ], 00:30:39.630 "core_count": 1 00:30:39.630 } 00:30:39.892 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 480553 00:30:39.892 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:30:39.892 [2024-12-10 00:11:43.960060] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:30:39.892 [2024-12-10 00:11:43.960114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480553 ] 00:30:39.892 [2024-12-10 00:11:44.034725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.892 [2024-12-10 00:11:44.075245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:39.892 Running I/O for 90 seconds... 
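Each check_status/port_status call traced above reduces to one bdevperf RPC plus a jq filter per field (current, connected, accessible). A minimal standalone sketch of that check, assuming the same bdevperf RPC socket as this run and an SPDK checkout at ./spdk; the check_path helper name is illustrative, not the script's own:

#!/usr/bin/env bash
# Query bdevperf's io_paths over its RPC socket and compare one field
# (current/connected/accessible) for the listener on the given TCP port.
rpc=./spdk/scripts/rpc.py          # assumed location of the SPDK checkout
sock=/var/tmp/bdevperf.sock        # RPC socket bdevperf uses in this run

check_path() {                     # illustrative helper, mirrors port_status
    local port=$1 field=$2 expected=$3
    local actual
    actual=$("$rpc" -s "$sock" bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# Same assertions as 'check_status true true true true true true':
for f in current connected accessible; do
    check_path 4420 "$f" true && check_path 4421 "$f" true || echo "mismatch: $f"
done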
00:30:39.892 11072.00 IOPS, 43.25 MiB/s [2024-12-09T23:12:14.828Z] 11085.50 IOPS, 43.30 MiB/s [2024-12-09T23:12:14.828Z] 11177.33 IOPS, 43.66 MiB/s [2024-12-09T23:12:14.828Z] 11190.50 IOPS, 43.71 MiB/s [2024-12-09T23:12:14.828Z] 11174.20 IOPS, 43.65 MiB/s [2024-12-09T23:12:14.828Z] 11180.33 IOPS, 43.67 MiB/s [2024-12-09T23:12:14.828Z] 11192.71 IOPS, 43.72 MiB/s [2024-12-09T23:12:14.828Z] 11199.62 IOPS, 43.75 MiB/s [2024-12-09T23:12:14.828Z] 11219.67 IOPS, 43.83 MiB/s [2024-12-09T23:12:14.828Z] 11235.30 IOPS, 43.89 MiB/s [2024-12-09T23:12:14.828Z] 11237.45 IOPS, 43.90 MiB/s [2024-12-09T23:12:14.828Z] 11241.17 IOPS, 43.91 MiB/s [2024-12-09T23:12:14.828Z] [2024-12-10 00:11:58.316009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:115464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.892 [2024-12-10 00:11:58.316046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:39.892 [2024-12-10 00:11:58.316079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:115472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.892 [2024-12-10 00:11:58.316088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:39.892 [2024-12-10 00:11:58.316101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:39.893 
[2024-12-10 00:11:58.316675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:115600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.316987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.316994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:115664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:115752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
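The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions filling this dump are what the host reports for I/O issued while a listener sits in the ANA state "inaccessible"; the trace drives those transitions with nvmf_subsystem_listener_set_ana_state. A minimal sketch of that target-side step, reusing the RPC exactly as traced (the set_ana helper name and the use of the default target RPC socket are illustrative assumptions):

# Flip the ANA state of each listener on the target, then let the host
# (bdevperf) notice the change before re-checking the io_paths.
rpc=./spdk/scripts/rpc.py                      # assumed SPDK checkout location
nqn=nqn.2016-06.io.spdk:cnode1                 # subsystem NQN from the trace

set_ana() {                                    # illustrative helper name
    local port=$1 state=$2
    "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s "$port" -n "$state"
}

set_ana 4420 non_optimized; set_ana 4421 non_optimized; sleep 1   # both paths usable
set_ana 4420 non_optimized; set_ana 4421 inaccessible; sleep 1    # take 4421 out of service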
00:30:39.893 [2024-12-10 00:11:58.317302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:115768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:39.893 [2024-12-10 00:11:58.317378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:115792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.893 [2024-12-10 00:11:58.317385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:115912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.317724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.317733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:115960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0020 p:0 m:0 
dnr:0 00:30:39.894 [2024-12-10 00:11:58.318564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:116016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:116048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:116056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:116080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:116096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:116104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.894 [2024-12-10 00:11:58.318878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:39.894 [2024-12-10 00:11:58.318893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.318900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.318915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:116120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.318923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.318940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.318947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.318962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.318970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.318985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.318993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:116152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:116176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:116208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:116216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:116224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
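The per-job block printed after killprocess earlier (runtime, iops, mibps, latency) is plain JSON, so the headline numbers can be pulled out of a log like this with jq when post-processing. A small sketch, assuming that block has been saved to a file named results.json (a hypothetical name, not something the test itself writes):

# Summarize the bdevperf result JSON captured above.
jq -r '.results[] | "\(.job): \(.iops | round) IOPS, \(.mibps | round) MiB/s over \(.runtime)s"' results.json
# Expected output for the run above: Nvme0n1: 10430 IOPS, 41 MiB/s over 29.058184s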
00:30:39.895 [2024-12-10 00:11:58.319323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:116264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.895 [2024-12-10 00:11:58.319494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:115296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:115352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319811] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:39.895 [2024-12-10 00:11:58.319883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.895 [2024-12-10 00:11:58.319892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:11:58.319909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:11:58.319916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:11:58.319933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:11:58.319940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:11:58.319958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:11:58.319964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:11:58.319981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:11:58.319989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:11:58.320008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:115440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:11:58.320015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:11:58.320033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:11:58.320039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 
cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:11:58.320056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:11:58.320063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:11:58.320082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:11:58.320090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:39.896 11190.92 IOPS, 43.71 MiB/s [2024-12-09T23:12:14.832Z] 10391.57 IOPS, 40.59 MiB/s [2024-12-09T23:12:14.832Z] 9698.80 IOPS, 37.89 MiB/s [2024-12-09T23:12:14.832Z] 9134.25 IOPS, 35.68 MiB/s [2024-12-09T23:12:14.832Z] 9256.41 IOPS, 36.16 MiB/s [2024-12-09T23:12:14.832Z] 9356.33 IOPS, 36.55 MiB/s [2024-12-09T23:12:14.832Z] 9500.89 IOPS, 37.11 MiB/s [2024-12-09T23:12:14.832Z] 9699.35 IOPS, 37.89 MiB/s [2024-12-09T23:12:14.832Z] 9875.67 IOPS, 38.58 MiB/s [2024-12-09T23:12:14.832Z] 9949.32 IOPS, 38.86 MiB/s [2024-12-09T23:12:14.832Z] 9998.22 IOPS, 39.06 MiB/s [2024-12-09T23:12:14.832Z] 10043.79 IOPS, 39.23 MiB/s [2024-12-09T23:12:14.832Z] 10164.28 IOPS, 39.70 MiB/s [2024-12-09T23:12:14.832Z] 10285.46 IOPS, 40.18 MiB/s [2024-12-09T23:12:14.832Z] [2024-12-10 00:12:12.073766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:12:12.073808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.073840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:12:12.073849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.073868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:111624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:12:12.073875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.073888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:12:12.073896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.073910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.073917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.073930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.073938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.073951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.073957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.073970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.073977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.073989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:12:12.073997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:111704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:12:12.074017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:111736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:12:12.074039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:111768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:12:12.074060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:12:12.074080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:12:12.074100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:12:12.074123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111896 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:12:12.074144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.896 [2024-12-10 00:12:12.074170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.074192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.074213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.074235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.074418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.074439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.074459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:112360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.074478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.074497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:46 nsid:1 lba:112392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.074516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.074538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.074551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.074559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.075359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.896 [2024-12-10 00:12:12.075377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:39.896 [2024-12-10 00:12:12.075393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.897 [2024-12-10 00:12:12.075401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.897 [2024-12-10 00:12:12.075421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.897 [2024-12-10 00:12:12.075441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.897 [2024-12-10 00:12:12.075461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.897 [2024-12-10 00:12:12.075482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.897 [2024-12-10 00:12:12.075500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075513] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.897 [2024-12-10 00:12:12.075521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 
sqhd:0079 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075906] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.075985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.075997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.076004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.076017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.076024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.076037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.076044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.076056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.897 [2024-12-10 00:12:12.076063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:39.897 [2024-12-10 00:12:12.076078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.898 [2024-12-10 00:12:12.076085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:39.898 10367.85 IOPS, 40.50 MiB/s [2024-12-09T23:12:14.834Z] 10404.64 IOPS, 40.64 MiB/s [2024-12-09T23:12:14.834Z] 10429.03 IOPS, 40.74 MiB/s [2024-12-09T23:12:14.834Z] Received shutdown signal, 
test time was about 29.058856 seconds
00:30:39.898
00:30:39.898 Latency(us)
00:30:39.898 [2024-12-09T23:12:14.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:39.898 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:39.898 Verification LBA range: start 0x0 length 0x4000
00:30:39.898 Nvme0n1 : 29.06 10430.14 40.74 0.00 0.00 12251.93 498.64 3019898.88
00:30:39.898 [2024-12-09T23:12:14.834Z] ===================================================================================================================
00:30:39.898 [2024-12-09T23:12:14.834Z] Total : 10430.14 40.74 0.00 0.00 12251.93 498.64 3019898.88
00:30:39.898 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:40.156 rmmod nvme_tcp
00:30:40.156 rmmod nvme_fabrics
00:30:40.156 rmmod nvme_keyring
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 480303 ']'
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 480303
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 480303 ']'
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 480303
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 480303
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 480303' 00:30:40.156 killing process with pid 480303 00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 480303 00:30:40.156 00:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 480303 00:30:40.415 00:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:40.415 00:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:40.415 00:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:40.415 00:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:30:40.415 00:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:30:40.415 00:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:40.415 00:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:30:40.415 00:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:40.415 00:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:40.415 00:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.415 00:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.415 00:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.340 00:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:42.340 00:30:42.340 real 0m40.898s 00:30:42.340 user 1m51.370s 00:30:42.340 sys 0m11.442s 00:30:42.340 00:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.340 00:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:42.340 ************************************ 00:30:42.340 END TEST nvmf_host_multipath_status 00:30:42.340 ************************************ 00:30:42.340 00:12:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:42.340 00:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:42.340 00:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:42.340 00:12:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.603 ************************************ 00:30:42.603 START TEST nvmf_discovery_remove_ifc 00:30:42.603 ************************************ 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:30:42.603 * Looking for test storage... 
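Before the discovery_remove_ifc output continues, note what the multipath_status teardown traced just above actually does: nvmftestfini from test/nvmf/common.sh deletes the test subsystem over RPC, unloads the host-side NVMe modules, kills the target process, reverts the SPDK firewall rules and removes the target namespace. A rough manual equivalent, assuming the pid, subsystem, namespace and interface names from this particular run (the helpers normally derive them), would look like:

    # rough manual equivalent of the nvmftestfini sequence above; run as root
    # assumptions from this run: pid 480303, cnode1, cvl_0_0_ns_spdk, cvl_0_1
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
    modprobe -v -r nvme-tcp                                 # unload host-side transport modules
    modprobe -v -r nvme-fabrics
    kill 480303                                             # stop the nvmf_tgt reactor process
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep only rules not tagged by SPDK
    ip netns del cvl_0_0_ns_spdk 2>/dev/null                # _remove_spdk_ns is assumed to delete this namespace
    ip -4 addr flush cvl_0_1                                # clear the initiator-side address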
00:30:42.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:30:42.603 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:42.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.604 --rc genhtml_branch_coverage=1 00:30:42.604 --rc genhtml_function_coverage=1 00:30:42.604 --rc genhtml_legend=1 00:30:42.604 --rc geninfo_all_blocks=1 00:30:42.604 --rc geninfo_unexecuted_blocks=1 00:30:42.604 00:30:42.604 ' 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:42.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.604 --rc genhtml_branch_coverage=1 00:30:42.604 --rc genhtml_function_coverage=1 00:30:42.604 --rc genhtml_legend=1 00:30:42.604 --rc geninfo_all_blocks=1 00:30:42.604 --rc geninfo_unexecuted_blocks=1 00:30:42.604 00:30:42.604 ' 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:42.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.604 --rc genhtml_branch_coverage=1 00:30:42.604 --rc genhtml_function_coverage=1 00:30:42.604 --rc genhtml_legend=1 00:30:42.604 --rc geninfo_all_blocks=1 00:30:42.604 --rc geninfo_unexecuted_blocks=1 00:30:42.604 00:30:42.604 ' 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:42.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.604 --rc genhtml_branch_coverage=1 00:30:42.604 --rc genhtml_function_coverage=1 00:30:42.604 --rc genhtml_legend=1 00:30:42.604 --rc geninfo_all_blocks=1 00:30:42.604 --rc geninfo_unexecuted_blocks=1 00:30:42.604 00:30:42.604 ' 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:30:42.604 
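The block above is scripts/common.sh deciding whether the installed lcov is older than 2.x (here the comparison is lt 1.15 2, which is true, so the extra branch/function coverage flags get added). The trace walks a plain field-by-field numeric compare of dotted versions; a condensed sketch of the same idea, not the literal common.sh code, is:

    # condensed sketch of the dotted-version "less than" test traced above
    version_lt() {
        local -a a b
        IFS='.-' read -ra a <<< "$1"
        IFS='.-' read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    # same outcome as the trace (the real common.sh also appends genhtml options)
    version_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'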
00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:42.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:30:42.604 00:12:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:30:49.185 00:12:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:49.185 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:49.186 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.186 00:12:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:49.186 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:49.186 Found net devices under 0000:86:00.0: cvl_0_0 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:49.186 Found net devices under 0000:86:00.1: cvl_0_1 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:49.186 
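What the nvmf_tcp_init trace above amounts to: the two E810 ports found earlier become the two ends of the TCP test link, with the target port isolated in its own network namespace. A condensed sketch of the topology commands, taken from the trace (run as root; the names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones this run chose):

    # target port cvl_0_0 lives in its own namespace at 10.0.0.2,
    # initiator port cvl_0_1 stays in the root namespace at 10.0.0.1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port towards the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The ping checks that follow verify both directions of this link before any NVMe traffic is attempted.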
00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:49.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:30:49.186 00:30:49.186 --- 10.0.0.2 ping statistics --- 00:30:49.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.186 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:49.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:30:49.186 00:30:49.186 --- 10.0.0.1 ping statistics --- 00:30:49.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.186 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=489714 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 489714 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 489714 ']' 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
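With connectivity confirmed both ways, nvmfappstart launches the target inside the namespace and waits for its RPC socket. A rough equivalent of what waitforlisten does, using the paths and core mask from this run (polling rpc_get_methods is an assumption; the helper's own probe may differ):

    # start the target in the namespace on core mask 0x2 and poll its RPC socket
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # waitforlisten equivalent: keep trying a harmless RPC until the app answers
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done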
00:30:49.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:49.186 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.186 [2024-12-10 00:12:23.435271] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:30:49.186 [2024-12-10 00:12:23.435325] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.187 [2024-12-10 00:12:23.514469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.187 [2024-12-10 00:12:23.555274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.187 [2024-12-10 00:12:23.555309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.187 [2024-12-10 00:12:23.555317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.187 [2024-12-10 00:12:23.555323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.187 [2024-12-10 00:12:23.555328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.187 [2024-12-10 00:12:23.555869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.187 [2024-12-10 00:12:23.700309] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.187 [2024-12-10 00:12:23.708479] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:49.187 null0 00:30:49.187 [2024-12-10 00:12:23.740469] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=489849 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -m 0x1 -r 
/tmp/host.sock --wait-for-rpc -L bdev_nvme 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 489849 /tmp/host.sock 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 489849 ']' 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:49.187 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.187 [2024-12-10 00:12:23.806769] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:30:49.187 [2024-12-10 00:12:23.806809] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489849 ] 00:30:49.187 [2024-12-10 00:12:23.880681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.187 [2024-12-10 00:12:23.920843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.187 00:12:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:49.187 00:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.187 00:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:30:49.187 00:12:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.187 00:12:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:50.573 [2024-12-10 00:12:25.076713] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:50.573 [2024-12-10 00:12:25.076733] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:50.573 [2024-12-10 00:12:25.076744] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:50.573 [2024-12-10 00:12:25.203138] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:50.573 [2024-12-10 00:12:25.423270] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:30:50.573 [2024-12-10 00:12:25.423958] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x110d780:1 started. 00:30:50.573 [2024-12-10 00:12:25.425329] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:50.573 [2024-12-10 00:12:25.425366] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:50.573 [2024-12-10 00:12:25.425384] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:50.573 [2024-12-10 00:12:25.425396] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:50.573 [2024-12-10 00:12:25.425413] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:50.573 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.573 [2024-12-10 00:12:25.426740] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x110d780 was disconnected and freed. delete nvme_qpair. 
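In script form, the topology and application setup traced above reduces to the sketch below. The ip/iptables commands and the RPC arguments are taken verbatim from the trace; the scripts/rpc.py spelling of the harness's rpc_cmd wrapper and the relative binary paths are assumptions, and the target-side subsystem/listener configuration (ports 8009 and 4420) happens through a batched rpc_cmd that the trace does not expand.

# plumb two ports of the same NIC back-to-back: the target port goes into its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# the comment tag is what the teardown later greps for
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# one nvmf_tgt as the target inside the namespace, a second one as the host on its own RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

# host side: set bdev_nvme options, finish framework init, then start the discovery service
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
./scripts/rpc.py -s /tmp/host.sock framework_start_init
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

With --wait-for-attach the discovery RPC only returns once the discovered subsystem has been attached, which matches nvme0n1 already being present in the first bdev_get_bdevs call below.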
00:30:50.573 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:30:50.573 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:50.573 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.573 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:50.573 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.573 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:50.573 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:50.573 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:50.573 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.573 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:30:50.573 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:30:50.573 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:30:50.832 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:30:50.832 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:50.832 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.832 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:50.832 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.832 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:50.832 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:50.832 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:50.832 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.832 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:50.832 00:12:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:51.775 00:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:51.775 00:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.775 00:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:51.775 00:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.775 00:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:51.775 00:12:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:51.775 00:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:51.775 00:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.775 00:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:51.775 00:12:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:53.150 00:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:53.150 00:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.150 00:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:53.150 00:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.150 00:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:53.150 00:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:53.150 00:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:53.150 00:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.150 00:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:53.150 00:12:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:54.086 00:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:54.086 00:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:54.086 00:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:54.086 00:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.086 00:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:54.086 00:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:54.086 00:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:54.086 00:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.086 00:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:54.086 00:12:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:55.027 00:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:55.027 00:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:55.027 00:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:55.027 00:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.027 00:12:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:55.027 00:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:55.027 00:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:55.027 00:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.027 00:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:55.027 00:12:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:55.961 00:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:55.961 00:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:55.961 00:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:55.961 00:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.961 00:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:55.961 00:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:55.961 00:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:55.961 00:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.961 [2024-12-10 00:12:30.866941] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:30:55.961 [2024-12-10 00:12:30.866985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.961 [2024-12-10 00:12:30.866996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.961 [2024-12-10 00:12:30.867006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.961 [2024-12-10 00:12:30.867013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.961 [2024-12-10 00:12:30.867021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.961 [2024-12-10 00:12:30.867032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.961 [2024-12-10 00:12:30.867039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.961 [2024-12-10 00:12:30.867046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.961 [2024-12-10 00:12:30.867053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.961 [2024-12-10 00:12:30.867059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.961 [2024-12-10 00:12:30.867066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e9fa0 is same with the state(6) to be set 00:30:55.961 [2024-12-10 00:12:30.876960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e9fa0 (9): Bad file descriptor 00:30:55.961 00:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:55.961 00:12:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:55.961 [2024-12-10 00:12:30.886996] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:55.961 [2024-12-10 00:12:30.887009] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:55.961 [2024-12-10 00:12:30.887015] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:55.961 [2024-12-10 00:12:30.887020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:55.961 [2024-12-10 00:12:30.887046] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:57.340 00:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:57.340 00:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:57.340 00:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:57.340 00:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.340 00:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:57.340 00:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:57.340 00:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:57.340 [2024-12-10 00:12:31.930190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:57.340 [2024-12-10 00:12:31.930262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e9fa0 with addr=10.0.0.2, port=4420 00:30:57.340 [2024-12-10 00:12:31.930294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e9fa0 is same with the state(6) to be set 00:30:57.340 [2024-12-10 00:12:31.930347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e9fa0 (9): Bad file descriptor 00:30:57.340 [2024-12-10 00:12:31.931296] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:30:57.340 [2024-12-10 00:12:31.931361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:57.340 [2024-12-10 00:12:31.931383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:57.340 [2024-12-10 00:12:31.931408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:30:57.340 [2024-12-10 00:12:31.931428] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:57.340 [2024-12-10 00:12:31.931452] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:57.340 [2024-12-10 00:12:31.931467] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:57.340 [2024-12-10 00:12:31.931488] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:57.340 [2024-12-10 00:12:31.931503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:57.340 00:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.340 00:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:57.340 00:12:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:58.277 [2024-12-10 00:12:32.934024] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:58.277 [2024-12-10 00:12:32.934043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:58.277 [2024-12-10 00:12:32.934055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:58.277 [2024-12-10 00:12:32.934061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:58.277 [2024-12-10 00:12:32.934068] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:30:58.277 [2024-12-10 00:12:32.934075] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:58.277 [2024-12-10 00:12:32.934079] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:58.277 [2024-12-10 00:12:32.934083] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
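The fragment the trace keeps re-expanding between these messages is a one-second polling loop over the host app's bdev list. A reconstruction consistent with those expansions, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (the real helpers live in the discovery_remove_ifc.sh test script):

get_bdev_list() {
    # flatten the bdev names reported over /tmp/host.sock into one sorted, space-separated line
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # poll once per second until the list equals the expected value ('' means "no bdevs left")
    while [[ "$(get_bdev_list)" != "$1" ]]; do
        sleep 1
    done
}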
00:30:58.278 [2024-12-10 00:12:32.934103] bdev_nvme.c:7261:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:58.278 [2024-12-10 00:12:32.934122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.278 [2024-12-10 00:12:32.934130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.278 [2024-12-10 00:12:32.934139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.278 [2024-12-10 00:12:32.934146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.278 [2024-12-10 00:12:32.934153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.278 [2024-12-10 00:12:32.934164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.278 [2024-12-10 00:12:32.934172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.278 [2024-12-10 00:12:32.934178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.278 [2024-12-10 00:12:32.934186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.278 [2024-12-10 00:12:32.934193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.278 [2024-12-10 00:12:32.934200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
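The abort/reconnect cascade above is self-inflicted: earlier in the trace the test pulled the target's address and link out from under the established connection and then waited for the bdev to vanish. In plain commands (all taken from the trace, wait_for_bdev as sketched above):

ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
# with --ctrlr-loss-timeout-sec 2 the reconnect attempts give up quickly, the discovery
# entry for nqn.2016-06.io.spdk:cnode0 is removed, and nvme0n1 drops out of the bdev list
wait_for_bdev ''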
00:30:58.278 [2024-12-10 00:12:32.934553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d92b0 (9): Bad file descriptor 00:30:58.278 [2024-12-10 00:12:32.935564] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:58.278 [2024-12-10 00:12:32.935575] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:30:58.278 00:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:58.278 00:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:58.278 00:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:58.278 00:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.278 00:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:58.278 00:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:58.278 00:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:58.278 00:12:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.278 00:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:58.278 00:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:58.278 00:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:58.278 00:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:58.278 00:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:58.278 00:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:58.278 00:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:58.278 00:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.278 00:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:58.278 00:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:58.278 00:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:58.278 00:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.278 00:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:58.278 00:12:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:59.215 00:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:59.215 00:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:59.215 00:12:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:59.215 00:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.215 00:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:59.215 00:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:30:59.215 00:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:59.475 00:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.475 00:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:59.475 00:12:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:00.415 [2024-12-10 00:12:34.992661] bdev_nvme.c:7510:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:00.415 [2024-12-10 00:12:34.992682] bdev_nvme.c:7596:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:00.415 [2024-12-10 00:12:34.992694] bdev_nvme.c:7473:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:00.415 [2024-12-10 00:12:35.119080] bdev_nvme.c:7439:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:00.415 00:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:00.415 00:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:00.415 00:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:00.415 00:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.415 00:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:00.415 00:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:00.415 00:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:00.415 00:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.415 00:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:00.415 00:12:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:00.415 [2024-12-10 00:12:35.294977] bdev_nvme.c:5657:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:31:00.415 [2024-12-10 00:12:35.295612] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x10ea320:1 started. 
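Re-plumbing the port is the mirror image of the teardown; the still-running discovery service on 10.0.0.2:8009 then attaches the same subsystem a second time, now as nvme1. Commands as traced, wait_for_bdev as sketched earlier:

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# the rediscovered controller is created as nvme1, so the namespace bdev to wait for is nvme1n1
wait_for_bdev nvme1n1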
00:31:00.415 [2024-12-10 00:12:35.296678] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:00.415 [2024-12-10 00:12:35.296707] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:00.415 [2024-12-10 00:12:35.296722] bdev_nvme.c:8306:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:00.415 [2024-12-10 00:12:35.296736] bdev_nvme.c:7329:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:00.415 [2024-12-10 00:12:35.296742] bdev_nvme.c:7288:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:00.415 [2024-12-10 00:12:35.301387] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x10ea320 was disconnected and freed. delete nvme_qpair. 00:31:01.351 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:01.351 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:01.351 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:01.351 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.351 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:01.351 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:01.351 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:01.351 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 489849 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 489849 ']' 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 489849 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 489849 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 489849' 00:31:01.610 killing process with pid 489849 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 489849 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 489849 00:31:01.610 00:12:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:01.610 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:01.610 rmmod nvme_tcp 00:31:01.610 rmmod nvme_fabrics 00:31:01.610 rmmod nvme_keyring 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 489714 ']' 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 489714 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 489714 ']' 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 489714 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 489714 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 489714' 00:31:01.870 killing process with pid 489714 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 489714 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 489714 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:01.870 00:12:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.406 00:12:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:04.406 00:31:04.406 real 0m21.567s 00:31:04.406 user 0m26.945s 00:31:04.406 sys 0m5.873s 00:31:04.406 00:12:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:04.406 00:12:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:04.406 ************************************ 00:31:04.406 END TEST nvmf_discovery_remove_ifc 00:31:04.406 ************************************ 00:31:04.406 00:12:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:04.406 00:12:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:04.406 00:12:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:04.406 00:12:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:04.406 ************************************ 00:31:04.406 START TEST nvmf_identify_kernel_target 00:31:04.406 ************************************ 00:31:04.406 00:12:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:04.406 * Looking for test storage... 
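The nvmftestfini teardown traced just before the END TEST banner is mostly routine; the one non-obvious part is that the firewall rule added at setup is removed by filtering on its SPDK_NVMF comment tag rather than by rule number. A sketch roughly in the traced order (killprocess, $hostpid and $nvmfpid are the harness's own helper and variables; the explicit ip netns delete is an assumption about what the silenced remove_spdk_ns step does):

killprocess "$hostpid"                 # host-side nvmf_tgt on /tmp/host.sock (pid 489849 in this run)
modprobe -v -r nvme-tcp                # unloads nvme_tcp, nvme_fabrics, nvme_keyring as logged above
modprobe -v -r nvme-fabrics
killprocess "$nvmfpid"                 # target-side nvmf_tgt in the namespace (pid 489714)
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules this test tagged
ip netns delete cvl_0_0_ns_spdk        # assumed equivalent of the silenced remove_spdk_ns
ip -4 addr flush cvl_0_1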
00:31:04.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:04.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.406 --rc genhtml_branch_coverage=1 00:31:04.406 --rc genhtml_function_coverage=1 00:31:04.406 --rc genhtml_legend=1 00:31:04.406 --rc geninfo_all_blocks=1 00:31:04.406 --rc geninfo_unexecuted_blocks=1 00:31:04.406 00:31:04.406 ' 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:04.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.406 --rc genhtml_branch_coverage=1 00:31:04.406 --rc genhtml_function_coverage=1 00:31:04.406 --rc genhtml_legend=1 00:31:04.406 --rc geninfo_all_blocks=1 00:31:04.406 --rc geninfo_unexecuted_blocks=1 00:31:04.406 00:31:04.406 ' 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:04.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.406 --rc genhtml_branch_coverage=1 00:31:04.406 --rc genhtml_function_coverage=1 00:31:04.406 --rc genhtml_legend=1 00:31:04.406 --rc geninfo_all_blocks=1 00:31:04.406 --rc geninfo_unexecuted_blocks=1 00:31:04.406 00:31:04.406 ' 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:04.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:04.406 --rc genhtml_branch_coverage=1 00:31:04.406 --rc genhtml_function_coverage=1 00:31:04.406 --rc genhtml_legend=1 00:31:04.406 --rc geninfo_all_blocks=1 00:31:04.406 --rc geninfo_unexecuted_blocks=1 00:31:04.406 00:31:04.406 ' 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.406 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:31:04.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:04.407 00:12:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:10.979 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:10.979 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:10.980 00:12:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:10.980 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:10.980 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:10.980 Found net devices under 0000:86:00.0: cvl_0_0 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:10.980 Found net devices under 0000:86:00.1: cvl_0_1 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
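For reference, the device scan traced above reduces to a small sysfs walk: each E810 PCI function exposes its network interface under /sys/bus/pci/devices/<bdf>/net/. A minimal sketch of that lookup follows; the PCI addresses and the resulting cvl_0_0/cvl_0_1 names are the ones seen in this run, and list_e810_netdevs is an illustrative helper, not a function from nvmf/common.sh.

  # Map the E810 PCI functions found above to their kernel net devices.
  list_e810_netdevs() {
      local pci path
      for pci in 0000:86:00.0 0000:86:00.1; do            # addresses from this run
          for path in "/sys/bus/pci/devices/$pci/net/"*; do
              [[ -e $path ]] || continue                  # skip if no netdev is bound
              printf '%s -> %s\n' "$pci" "${path##*/}"    # e.g. 0000:86:00.0 -> cvl_0_0
          done
      done
  }
  list_e810_netdevs
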
-- # net_devs+=("${pci_net_devs[@]}") 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:10.980 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:10.981 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:10.981 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:10.981 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:10.981 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:10.981 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:10.981 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:10.981 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:10.981 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:10.981 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:10.981 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:10.981 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:10.981 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:10.981 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:10.981 00:12:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:10.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:10.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:31:10.981 00:31:10.981 --- 10.0.0.2 ping statistics --- 00:31:10.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.981 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:10.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:10.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:31:10.981 00:31:10.981 --- 10.0.0.1 ping statistics --- 00:31:10.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:10.981 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:10.981 00:12:45 
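The nvmf_tcp_init trace above sets up the two-port topology used for the rest of the test: the target-side port (cvl_0_0) is moved into its own network namespace and the initiator-side port (cvl_0_1) stays in the default namespace, with an iptables rule opening TCP/4420 and two pings confirming reachability. Condensed into a standalone sketch below; root is required, the interface names, addresses and rule comment are copied from this run, and this is not the verbatim nvmf/common.sh implementation.

  set -e
  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk        # names taken from this run
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                        # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                    # initiator side, default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  # Allow NVMe/TCP (port 4420) in, tagged so teardown can strip exactly these rules.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                       # default ns -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                   # namespace -> default ns
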
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:10.981 00:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:31:12.884 Waiting for block devices as requested 00:31:13.144 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:13.144 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:13.403 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:13.403 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:13.403 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:13.403 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:13.662 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:13.662 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:13.662 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:13.921 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:13.921 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:13.921 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:14.180 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:14.180 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:14.180 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:14.180 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:14.440 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py nvme0n1 00:31:14.440 No valid GPT data, bailing 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:31:14.440 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:31:14.441 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:31:14.441 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:31:14.441 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:31:14.441 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:31:14.441 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:31:14.441 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:14.441 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:31:14.702 00:31:14.702 Discovery Log Number of Records 2, Generation counter 2 00:31:14.702 =====Discovery Log Entry 0====== 00:31:14.702 trtype: tcp 00:31:14.702 adrfam: ipv4 00:31:14.702 subtype: current discovery subsystem 00:31:14.702 treq: not specified, sq flow control disable supported 00:31:14.702 portid: 1 00:31:14.702 trsvcid: 4420 00:31:14.702 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:14.702 traddr: 10.0.0.1 00:31:14.702 eflags: none 00:31:14.702 sectype: none 00:31:14.702 =====Discovery Log Entry 1====== 00:31:14.702 trtype: tcp 00:31:14.702 adrfam: ipv4 00:31:14.702 subtype: nvme subsystem 00:31:14.702 treq: not specified, sq flow control disable 
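The configure_kernel_target trace above builds a kernel NVMe-oF/TCP target through configfs, backed by the local /dev/nvme0n1 (the spdk-gpt.py "No valid GPT data, bailing" check only confirms the disk is unpartitioned and free to use). A hedged sketch of the same sequence is below. The xtrace does not show the redirection targets of the echo commands, so the attribute file names here are the standard kernel nvmet configfs ones and are inferred rather than copied from the log; the NQN, backing device, address and port are from this run.

  set -e
  modprobe nvmet                                    # nvmet_tcp ends up loaded too (see the later modprobe -r)
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # surfaces as the Model Number in identify
  echo 1 > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp      > "$nvmet/ports/1/addr_trtype"
  echo 4420     > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4     > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"      # binding the subsystem to the port makes it reachable
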
supported 00:31:14.702 portid: 1 00:31:14.702 trsvcid: 4420 00:31:14.702 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:14.702 traddr: 10.0.0.1 00:31:14.702 eflags: none 00:31:14.702 sectype: none 00:31:14.702 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:14.702 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:14.702 ===================================================== 00:31:14.702 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:14.702 ===================================================== 00:31:14.702 Controller Capabilities/Features 00:31:14.702 ================================ 00:31:14.702 Vendor ID: 0000 00:31:14.702 Subsystem Vendor ID: 0000 00:31:14.702 Serial Number: c0f081ec8b3c29a4fdd2 00:31:14.702 Model Number: Linux 00:31:14.702 Firmware Version: 6.8.9-20 00:31:14.702 Recommended Arb Burst: 0 00:31:14.702 IEEE OUI Identifier: 00 00 00 00:31:14.702 Multi-path I/O 00:31:14.702 May have multiple subsystem ports: No 00:31:14.702 May have multiple controllers: No 00:31:14.702 Associated with SR-IOV VF: No 00:31:14.702 Max Data Transfer Size: Unlimited 00:31:14.702 Max Number of Namespaces: 0 00:31:14.702 Max Number of I/O Queues: 1024 00:31:14.702 NVMe Specification Version (VS): 1.3 00:31:14.702 NVMe Specification Version (Identify): 1.3 00:31:14.702 Maximum Queue Entries: 1024 00:31:14.702 Contiguous Queues Required: No 00:31:14.702 Arbitration Mechanisms Supported 00:31:14.702 Weighted Round Robin: Not Supported 00:31:14.702 Vendor Specific: Not Supported 00:31:14.702 Reset Timeout: 7500 ms 00:31:14.702 Doorbell Stride: 4 bytes 00:31:14.702 NVM Subsystem Reset: Not Supported 00:31:14.702 Command Sets Supported 00:31:14.702 NVM Command Set: Supported 00:31:14.702 Boot Partition: Not Supported 00:31:14.702 Memory Page Size Minimum: 4096 bytes 00:31:14.702 Memory Page Size Maximum: 4096 bytes 00:31:14.702 Persistent Memory Region: Not Supported 00:31:14.702 Optional Asynchronous Events Supported 00:31:14.702 Namespace Attribute Notices: Not Supported 00:31:14.702 Firmware Activation Notices: Not Supported 00:31:14.702 ANA Change Notices: Not Supported 00:31:14.702 PLE Aggregate Log Change Notices: Not Supported 00:31:14.702 LBA Status Info Alert Notices: Not Supported 00:31:14.702 EGE Aggregate Log Change Notices: Not Supported 00:31:14.702 Normal NVM Subsystem Shutdown event: Not Supported 00:31:14.702 Zone Descriptor Change Notices: Not Supported 00:31:14.702 Discovery Log Change Notices: Supported 00:31:14.702 Controller Attributes 00:31:14.702 128-bit Host Identifier: Not Supported 00:31:14.702 Non-Operational Permissive Mode: Not Supported 00:31:14.702 NVM Sets: Not Supported 00:31:14.702 Read Recovery Levels: Not Supported 00:31:14.702 Endurance Groups: Not Supported 00:31:14.702 Predictable Latency Mode: Not Supported 00:31:14.702 Traffic Based Keep ALive: Not Supported 00:31:14.702 Namespace Granularity: Not Supported 00:31:14.702 SQ Associations: Not Supported 00:31:14.702 UUID List: Not Supported 00:31:14.702 Multi-Domain Subsystem: Not Supported 00:31:14.702 Fixed Capacity Management: Not Supported 00:31:14.702 Variable Capacity Management: Not Supported 00:31:14.702 Delete Endurance Group: Not Supported 00:31:14.702 Delete NVM Set: Not Supported 00:31:14.702 Extended LBA Formats Supported: Not Supported 00:31:14.702 Flexible Data Placement 
Supported: Not Supported 00:31:14.702 00:31:14.702 Controller Memory Buffer Support 00:31:14.702 ================================ 00:31:14.702 Supported: No 00:31:14.702 00:31:14.702 Persistent Memory Region Support 00:31:14.702 ================================ 00:31:14.702 Supported: No 00:31:14.702 00:31:14.702 Admin Command Set Attributes 00:31:14.702 ============================ 00:31:14.702 Security Send/Receive: Not Supported 00:31:14.702 Format NVM: Not Supported 00:31:14.702 Firmware Activate/Download: Not Supported 00:31:14.702 Namespace Management: Not Supported 00:31:14.702 Device Self-Test: Not Supported 00:31:14.702 Directives: Not Supported 00:31:14.702 NVMe-MI: Not Supported 00:31:14.702 Virtualization Management: Not Supported 00:31:14.702 Doorbell Buffer Config: Not Supported 00:31:14.702 Get LBA Status Capability: Not Supported 00:31:14.702 Command & Feature Lockdown Capability: Not Supported 00:31:14.702 Abort Command Limit: 1 00:31:14.702 Async Event Request Limit: 1 00:31:14.702 Number of Firmware Slots: N/A 00:31:14.703 Firmware Slot 1 Read-Only: N/A 00:31:14.703 Firmware Activation Without Reset: N/A 00:31:14.703 Multiple Update Detection Support: N/A 00:31:14.703 Firmware Update Granularity: No Information Provided 00:31:14.703 Per-Namespace SMART Log: No 00:31:14.703 Asymmetric Namespace Access Log Page: Not Supported 00:31:14.703 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:14.703 Command Effects Log Page: Not Supported 00:31:14.703 Get Log Page Extended Data: Supported 00:31:14.703 Telemetry Log Pages: Not Supported 00:31:14.703 Persistent Event Log Pages: Not Supported 00:31:14.703 Supported Log Pages Log Page: May Support 00:31:14.703 Commands Supported & Effects Log Page: Not Supported 00:31:14.703 Feature Identifiers & Effects Log Page:May Support 00:31:14.703 NVMe-MI Commands & Effects Log Page: May Support 00:31:14.703 Data Area 4 for Telemetry Log: Not Supported 00:31:14.703 Error Log Page Entries Supported: 1 00:31:14.703 Keep Alive: Not Supported 00:31:14.703 00:31:14.703 NVM Command Set Attributes 00:31:14.703 ========================== 00:31:14.703 Submission Queue Entry Size 00:31:14.703 Max: 1 00:31:14.703 Min: 1 00:31:14.703 Completion Queue Entry Size 00:31:14.703 Max: 1 00:31:14.703 Min: 1 00:31:14.703 Number of Namespaces: 0 00:31:14.703 Compare Command: Not Supported 00:31:14.703 Write Uncorrectable Command: Not Supported 00:31:14.703 Dataset Management Command: Not Supported 00:31:14.703 Write Zeroes Command: Not Supported 00:31:14.703 Set Features Save Field: Not Supported 00:31:14.703 Reservations: Not Supported 00:31:14.703 Timestamp: Not Supported 00:31:14.703 Copy: Not Supported 00:31:14.703 Volatile Write Cache: Not Present 00:31:14.703 Atomic Write Unit (Normal): 1 00:31:14.703 Atomic Write Unit (PFail): 1 00:31:14.703 Atomic Compare & Write Unit: 1 00:31:14.703 Fused Compare & Write: Not Supported 00:31:14.703 Scatter-Gather List 00:31:14.703 SGL Command Set: Supported 00:31:14.703 SGL Keyed: Not Supported 00:31:14.703 SGL Bit Bucket Descriptor: Not Supported 00:31:14.703 SGL Metadata Pointer: Not Supported 00:31:14.703 Oversized SGL: Not Supported 00:31:14.703 SGL Metadata Address: Not Supported 00:31:14.703 SGL Offset: Supported 00:31:14.703 Transport SGL Data Block: Not Supported 00:31:14.703 Replay Protected Memory Block: Not Supported 00:31:14.703 00:31:14.703 Firmware Slot Information 00:31:14.703 ========================= 00:31:14.703 Active slot: 0 00:31:14.703 00:31:14.703 00:31:14.703 Error Log 00:31:14.703 
========= 00:31:14.703 00:31:14.703 Active Namespaces 00:31:14.703 ================= 00:31:14.703 Discovery Log Page 00:31:14.703 ================== 00:31:14.703 Generation Counter: 2 00:31:14.703 Number of Records: 2 00:31:14.703 Record Format: 0 00:31:14.703 00:31:14.703 Discovery Log Entry 0 00:31:14.703 ---------------------- 00:31:14.703 Transport Type: 3 (TCP) 00:31:14.703 Address Family: 1 (IPv4) 00:31:14.703 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:14.703 Entry Flags: 00:31:14.703 Duplicate Returned Information: 0 00:31:14.703 Explicit Persistent Connection Support for Discovery: 0 00:31:14.703 Transport Requirements: 00:31:14.703 Secure Channel: Not Specified 00:31:14.703 Port ID: 1 (0x0001) 00:31:14.703 Controller ID: 65535 (0xffff) 00:31:14.703 Admin Max SQ Size: 32 00:31:14.703 Transport Service Identifier: 4420 00:31:14.703 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:14.703 Transport Address: 10.0.0.1 00:31:14.703 Discovery Log Entry 1 00:31:14.703 ---------------------- 00:31:14.703 Transport Type: 3 (TCP) 00:31:14.703 Address Family: 1 (IPv4) 00:31:14.703 Subsystem Type: 2 (NVM Subsystem) 00:31:14.703 Entry Flags: 00:31:14.703 Duplicate Returned Information: 0 00:31:14.703 Explicit Persistent Connection Support for Discovery: 0 00:31:14.703 Transport Requirements: 00:31:14.703 Secure Channel: Not Specified 00:31:14.703 Port ID: 1 (0x0001) 00:31:14.703 Controller ID: 65535 (0xffff) 00:31:14.703 Admin Max SQ Size: 32 00:31:14.703 Transport Service Identifier: 4420 00:31:14.703 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:14.703 Transport Address: 10.0.0.1 00:31:14.703 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:14.703 get_feature(0x01) failed 00:31:14.703 get_feature(0x02) failed 00:31:14.703 get_feature(0x04) failed 00:31:14.703 ===================================================== 00:31:14.703 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:14.703 ===================================================== 00:31:14.703 Controller Capabilities/Features 00:31:14.703 ================================ 00:31:14.703 Vendor ID: 0000 00:31:14.703 Subsystem Vendor ID: 0000 00:31:14.703 Serial Number: a093d150bd112e303bbe 00:31:14.703 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:14.703 Firmware Version: 6.8.9-20 00:31:14.703 Recommended Arb Burst: 6 00:31:14.703 IEEE OUI Identifier: 00 00 00 00:31:14.703 Multi-path I/O 00:31:14.703 May have multiple subsystem ports: Yes 00:31:14.703 May have multiple controllers: Yes 00:31:14.703 Associated with SR-IOV VF: No 00:31:14.703 Max Data Transfer Size: Unlimited 00:31:14.703 Max Number of Namespaces: 1024 00:31:14.703 Max Number of I/O Queues: 128 00:31:14.703 NVMe Specification Version (VS): 1.3 00:31:14.703 NVMe Specification Version (Identify): 1.3 00:31:14.703 Maximum Queue Entries: 1024 00:31:14.703 Contiguous Queues Required: No 00:31:14.703 Arbitration Mechanisms Supported 00:31:14.703 Weighted Round Robin: Not Supported 00:31:14.703 Vendor Specific: Not Supported 00:31:14.703 Reset Timeout: 7500 ms 00:31:14.703 Doorbell Stride: 4 bytes 00:31:14.703 NVM Subsystem Reset: Not Supported 00:31:14.703 Command Sets Supported 00:31:14.703 NVM Command Set: Supported 00:31:14.703 Boot Partition: Not Supported 00:31:14.703 
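With the kernel target up, the host side of this test is the three queries whose output fills this part of the log: an NVMe-oF discovery, an identify of the discovery controller, and an identify of the exported test subsystem. Collected into one sketch below; the hostnqn/hostid are the values this run uses (visible in the nvme discover line above), and spdk_nvme_identify is assumed to be SPDK's build/bin/spdk_nvme_identify on PATH. The get_feature(0x01/0x02/0x04/0x05) failed notices around the second identify appear to be the kernel target declining optional features the tool probes for; the test does not treat them as errors.

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
  # Discovery service on the kernel target's address:
  nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -a 10.0.0.1 -t tcp -s 4420
  # Identify the discovery controller, then the NVM subsystem exported above:
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
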
Memory Page Size Minimum: 4096 bytes 00:31:14.703 Memory Page Size Maximum: 4096 bytes 00:31:14.703 Persistent Memory Region: Not Supported 00:31:14.703 Optional Asynchronous Events Supported 00:31:14.703 Namespace Attribute Notices: Supported 00:31:14.703 Firmware Activation Notices: Not Supported 00:31:14.703 ANA Change Notices: Supported 00:31:14.703 PLE Aggregate Log Change Notices: Not Supported 00:31:14.703 LBA Status Info Alert Notices: Not Supported 00:31:14.703 EGE Aggregate Log Change Notices: Not Supported 00:31:14.703 Normal NVM Subsystem Shutdown event: Not Supported 00:31:14.703 Zone Descriptor Change Notices: Not Supported 00:31:14.703 Discovery Log Change Notices: Not Supported 00:31:14.703 Controller Attributes 00:31:14.703 128-bit Host Identifier: Supported 00:31:14.703 Non-Operational Permissive Mode: Not Supported 00:31:14.703 NVM Sets: Not Supported 00:31:14.703 Read Recovery Levels: Not Supported 00:31:14.703 Endurance Groups: Not Supported 00:31:14.703 Predictable Latency Mode: Not Supported 00:31:14.703 Traffic Based Keep ALive: Supported 00:31:14.703 Namespace Granularity: Not Supported 00:31:14.703 SQ Associations: Not Supported 00:31:14.703 UUID List: Not Supported 00:31:14.703 Multi-Domain Subsystem: Not Supported 00:31:14.703 Fixed Capacity Management: Not Supported 00:31:14.703 Variable Capacity Management: Not Supported 00:31:14.703 Delete Endurance Group: Not Supported 00:31:14.703 Delete NVM Set: Not Supported 00:31:14.703 Extended LBA Formats Supported: Not Supported 00:31:14.703 Flexible Data Placement Supported: Not Supported 00:31:14.703 00:31:14.703 Controller Memory Buffer Support 00:31:14.703 ================================ 00:31:14.703 Supported: No 00:31:14.703 00:31:14.703 Persistent Memory Region Support 00:31:14.703 ================================ 00:31:14.703 Supported: No 00:31:14.703 00:31:14.703 Admin Command Set Attributes 00:31:14.703 ============================ 00:31:14.703 Security Send/Receive: Not Supported 00:31:14.703 Format NVM: Not Supported 00:31:14.703 Firmware Activate/Download: Not Supported 00:31:14.703 Namespace Management: Not Supported 00:31:14.703 Device Self-Test: Not Supported 00:31:14.703 Directives: Not Supported 00:31:14.703 NVMe-MI: Not Supported 00:31:14.703 Virtualization Management: Not Supported 00:31:14.703 Doorbell Buffer Config: Not Supported 00:31:14.703 Get LBA Status Capability: Not Supported 00:31:14.703 Command & Feature Lockdown Capability: Not Supported 00:31:14.703 Abort Command Limit: 4 00:31:14.703 Async Event Request Limit: 4 00:31:14.703 Number of Firmware Slots: N/A 00:31:14.703 Firmware Slot 1 Read-Only: N/A 00:31:14.703 Firmware Activation Without Reset: N/A 00:31:14.703 Multiple Update Detection Support: N/A 00:31:14.703 Firmware Update Granularity: No Information Provided 00:31:14.703 Per-Namespace SMART Log: Yes 00:31:14.703 Asymmetric Namespace Access Log Page: Supported 00:31:14.703 ANA Transition Time : 10 sec 00:31:14.703 00:31:14.703 Asymmetric Namespace Access Capabilities 00:31:14.704 ANA Optimized State : Supported 00:31:14.704 ANA Non-Optimized State : Supported 00:31:14.704 ANA Inaccessible State : Supported 00:31:14.704 ANA Persistent Loss State : Supported 00:31:14.704 ANA Change State : Supported 00:31:14.704 ANAGRPID is not changed : No 00:31:14.704 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:14.704 00:31:14.704 ANA Group Identifier Maximum : 128 00:31:14.704 Number of ANA Group Identifiers : 128 00:31:14.704 Max Number of Allowed Namespaces : 1024 00:31:14.704 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:14.704 Command Effects Log Page: Supported 00:31:14.704 Get Log Page Extended Data: Supported 00:31:14.704 Telemetry Log Pages: Not Supported 00:31:14.704 Persistent Event Log Pages: Not Supported 00:31:14.704 Supported Log Pages Log Page: May Support 00:31:14.704 Commands Supported & Effects Log Page: Not Supported 00:31:14.704 Feature Identifiers & Effects Log Page:May Support 00:31:14.704 NVMe-MI Commands & Effects Log Page: May Support 00:31:14.704 Data Area 4 for Telemetry Log: Not Supported 00:31:14.704 Error Log Page Entries Supported: 128 00:31:14.704 Keep Alive: Supported 00:31:14.704 Keep Alive Granularity: 1000 ms 00:31:14.704 00:31:14.704 NVM Command Set Attributes 00:31:14.704 ========================== 00:31:14.704 Submission Queue Entry Size 00:31:14.704 Max: 64 00:31:14.704 Min: 64 00:31:14.704 Completion Queue Entry Size 00:31:14.704 Max: 16 00:31:14.704 Min: 16 00:31:14.704 Number of Namespaces: 1024 00:31:14.704 Compare Command: Not Supported 00:31:14.704 Write Uncorrectable Command: Not Supported 00:31:14.704 Dataset Management Command: Supported 00:31:14.704 Write Zeroes Command: Supported 00:31:14.704 Set Features Save Field: Not Supported 00:31:14.704 Reservations: Not Supported 00:31:14.704 Timestamp: Not Supported 00:31:14.704 Copy: Not Supported 00:31:14.704 Volatile Write Cache: Present 00:31:14.704 Atomic Write Unit (Normal): 1 00:31:14.704 Atomic Write Unit (PFail): 1 00:31:14.704 Atomic Compare & Write Unit: 1 00:31:14.704 Fused Compare & Write: Not Supported 00:31:14.704 Scatter-Gather List 00:31:14.704 SGL Command Set: Supported 00:31:14.704 SGL Keyed: Not Supported 00:31:14.704 SGL Bit Bucket Descriptor: Not Supported 00:31:14.704 SGL Metadata Pointer: Not Supported 00:31:14.704 Oversized SGL: Not Supported 00:31:14.704 SGL Metadata Address: Not Supported 00:31:14.704 SGL Offset: Supported 00:31:14.704 Transport SGL Data Block: Not Supported 00:31:14.704 Replay Protected Memory Block: Not Supported 00:31:14.704 00:31:14.704 Firmware Slot Information 00:31:14.704 ========================= 00:31:14.704 Active slot: 0 00:31:14.704 00:31:14.704 Asymmetric Namespace Access 00:31:14.704 =========================== 00:31:14.704 Change Count : 0 00:31:14.704 Number of ANA Group Descriptors : 1 00:31:14.704 ANA Group Descriptor : 0 00:31:14.704 ANA Group ID : 1 00:31:14.704 Number of NSID Values : 1 00:31:14.704 Change Count : 0 00:31:14.704 ANA State : 1 00:31:14.704 Namespace Identifier : 1 00:31:14.704 00:31:14.704 Commands Supported and Effects 00:31:14.704 ============================== 00:31:14.704 Admin Commands 00:31:14.704 -------------- 00:31:14.704 Get Log Page (02h): Supported 00:31:14.704 Identify (06h): Supported 00:31:14.704 Abort (08h): Supported 00:31:14.704 Set Features (09h): Supported 00:31:14.704 Get Features (0Ah): Supported 00:31:14.704 Asynchronous Event Request (0Ch): Supported 00:31:14.704 Keep Alive (18h): Supported 00:31:14.704 I/O Commands 00:31:14.704 ------------ 00:31:14.704 Flush (00h): Supported 00:31:14.704 Write (01h): Supported LBA-Change 00:31:14.704 Read (02h): Supported 00:31:14.704 Write Zeroes (08h): Supported LBA-Change 00:31:14.704 Dataset Management (09h): Supported 00:31:14.704 00:31:14.704 Error Log 00:31:14.704 ========= 00:31:14.704 Entry: 0 00:31:14.704 Error Count: 0x3 00:31:14.704 Submission Queue Id: 0x0 00:31:14.704 Command Id: 0x5 00:31:14.704 Phase Bit: 0 00:31:14.704 Status Code: 0x2 00:31:14.704 Status Code Type: 0x0 00:31:14.704 Do Not Retry: 1 00:31:14.704 
Error Location: 0x28 00:31:14.704 LBA: 0x0 00:31:14.704 Namespace: 0x0 00:31:14.704 Vendor Log Page: 0x0 00:31:14.704 ----------- 00:31:14.704 Entry: 1 00:31:14.704 Error Count: 0x2 00:31:14.704 Submission Queue Id: 0x0 00:31:14.704 Command Id: 0x5 00:31:14.704 Phase Bit: 0 00:31:14.704 Status Code: 0x2 00:31:14.704 Status Code Type: 0x0 00:31:14.704 Do Not Retry: 1 00:31:14.704 Error Location: 0x28 00:31:14.704 LBA: 0x0 00:31:14.704 Namespace: 0x0 00:31:14.704 Vendor Log Page: 0x0 00:31:14.704 ----------- 00:31:14.704 Entry: 2 00:31:14.704 Error Count: 0x1 00:31:14.704 Submission Queue Id: 0x0 00:31:14.704 Command Id: 0x4 00:31:14.704 Phase Bit: 0 00:31:14.704 Status Code: 0x2 00:31:14.704 Status Code Type: 0x0 00:31:14.704 Do Not Retry: 1 00:31:14.704 Error Location: 0x28 00:31:14.704 LBA: 0x0 00:31:14.704 Namespace: 0x0 00:31:14.704 Vendor Log Page: 0x0 00:31:14.704 00:31:14.704 Number of Queues 00:31:14.704 ================ 00:31:14.704 Number of I/O Submission Queues: 128 00:31:14.704 Number of I/O Completion Queues: 128 00:31:14.704 00:31:14.704 ZNS Specific Controller Data 00:31:14.704 ============================ 00:31:14.704 Zone Append Size Limit: 0 00:31:14.704 00:31:14.704 00:31:14.704 Active Namespaces 00:31:14.704 ================= 00:31:14.704 get_feature(0x05) failed 00:31:14.704 Namespace ID:1 00:31:14.704 Command Set Identifier: NVM (00h) 00:31:14.704 Deallocate: Supported 00:31:14.704 Deallocated/Unwritten Error: Not Supported 00:31:14.704 Deallocated Read Value: Unknown 00:31:14.704 Deallocate in Write Zeroes: Not Supported 00:31:14.704 Deallocated Guard Field: 0xFFFF 00:31:14.704 Flush: Supported 00:31:14.704 Reservation: Not Supported 00:31:14.704 Namespace Sharing Capabilities: Multiple Controllers 00:31:14.704 Size (in LBAs): 1953525168 (931GiB) 00:31:14.704 Capacity (in LBAs): 1953525168 (931GiB) 00:31:14.704 Utilization (in LBAs): 1953525168 (931GiB) 00:31:14.704 UUID: 11f962f5-4935-4991-a346-8dee070e0fd4 00:31:14.704 Thin Provisioning: Not Supported 00:31:14.704 Per-NS Atomic Units: Yes 00:31:14.704 Atomic Boundary Size (Normal): 0 00:31:14.704 Atomic Boundary Size (PFail): 0 00:31:14.704 Atomic Boundary Offset: 0 00:31:14.704 NGUID/EUI64 Never Reused: No 00:31:14.704 ANA group ID: 1 00:31:14.704 Namespace Write Protected: No 00:31:14.704 Number of LBA Formats: 1 00:31:14.704 Current LBA Format: LBA Format #00 00:31:14.704 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:14.704 00:31:14.704 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:14.704 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:14.704 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:31:14.704 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:14.704 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:31:14.704 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:14.704 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:14.704 rmmod nvme_tcp 00:31:14.704 rmmod nvme_fabrics 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:31:14.964 00:12:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.964 00:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.873 00:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:16.873 00:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:16.873 00:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:16.873 00:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:31:16.873 00:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:16.873 00:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:16.873 00:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:16.873 00:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:16.873 00:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:31:16.873 00:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:31:16.873 00:12:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:31:20.166 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:20.166 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:20.732 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:20.996 00:31:20.996 real 0m16.798s 00:31:20.996 user 0m4.363s 00:31:20.996 sys 0m8.694s 00:31:20.996 00:12:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:20.996 00:12:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:20.996 ************************************ 00:31:20.996 END TEST nvmf_identify_kernel_target 00:31:20.996 ************************************ 00:31:20.996 00:12:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:20.996 00:12:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:20.996 00:12:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:20.996 00:12:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.996 ************************************ 00:31:20.996 START TEST nvmf_auth_host 00:31:20.996 ************************************ 00:31:20.996 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:20.996 * Looking for test storage... 
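For completeness, the teardown traced at the end of nvmf_identify_kernel_target above (nvmftestfini followed by clean_kernel_target) reduces to roughly the following. Module names, the SPDK_NVMF iptables tag and the configfs paths are from the log; the echo 0 target and the namespace removal are inferred, and the exact ordering inside nvmf/common.sh may differ.

  # Host side: unload the NVMe/TCP initiator modules and undo the firewall/netns changes.
  modprobe -r nvme-tcp nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the rules tagged by the test
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true      # stand-in for _remove_spdk_ns (assumed)
  ip -4 addr flush cvl_0_1
  # Target side: disable and dismantle the kernel nvmet subsystem created earlier.
  nvmet=/sys/kernel/config/nvmet
  echo 0 > "$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable"   # target file inferred
  rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1"
  rmdir "$nvmet/ports/1"
  rmdir "$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn"
  modprobe -r nvmet_tcp nvmet
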
00:31:20.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:31:20.996 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:20.996 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:20.996 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:31:21.262 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:21.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.263 --rc genhtml_branch_coverage=1 00:31:21.263 --rc genhtml_function_coverage=1 00:31:21.263 --rc genhtml_legend=1 00:31:21.263 --rc geninfo_all_blocks=1 00:31:21.263 --rc geninfo_unexecuted_blocks=1 00:31:21.263 00:31:21.263 ' 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:21.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.263 --rc genhtml_branch_coverage=1 00:31:21.263 --rc genhtml_function_coverage=1 00:31:21.263 --rc genhtml_legend=1 00:31:21.263 --rc geninfo_all_blocks=1 00:31:21.263 --rc geninfo_unexecuted_blocks=1 00:31:21.263 00:31:21.263 ' 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:21.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.263 --rc genhtml_branch_coverage=1 00:31:21.263 --rc genhtml_function_coverage=1 00:31:21.263 --rc genhtml_legend=1 00:31:21.263 --rc geninfo_all_blocks=1 00:31:21.263 --rc geninfo_unexecuted_blocks=1 00:31:21.263 00:31:21.263 ' 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:21.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:21.263 --rc genhtml_branch_coverage=1 00:31:21.263 --rc genhtml_function_coverage=1 00:31:21.263 --rc genhtml_legend=1 00:31:21.263 --rc geninfo_all_blocks=1 00:31:21.263 --rc geninfo_unexecuted_blocks=1 00:31:21.263 00:31:21.263 ' 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:21.263 00:12:55 
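The scripts/common.sh trace above (cmp_versions, reached through lt 1.15 2) is a field-by-field version compare used to decide whether the installed lcov is new enough for the branch/function coverage options. A compact re-implementation of the same idea, for illustration only; version_lt is not the name used in scripts/common.sh.

  # Return success if version $1 is strictly older than version $2.
  version_lt() {
      local -a a b; local i
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1        # versions are equal
  }
  version_lt 1.15 2 && echo "1.15 < 2"    # matches the lt 1.15 2 check in the trace
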
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:21.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.263 00:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.263 00:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:21.263 00:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:21.263 00:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:21.263 00:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:27.833 00:13:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:27.833 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:27.833 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.833 
00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:27.833 Found net devices under 0000:86:00.0: cvl_0_0 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:27.833 Found net devices under 0000:86:00.1: cvl_0_1 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.833 00:13:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:27.833 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:27.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:27.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:31:27.834 00:31:27.834 --- 10.0.0.2 ping statistics --- 00:31:27.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.834 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:27.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:27.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:31:27.834 00:31:27.834 --- 10.0.0.1 ping statistics --- 00:31:27.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.834 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=501831 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 501831 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 501831 ']' 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
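Condensed, the bring-up traced above splits the two ice-driven E810 ports between network namespaces so one physical machine can act as both initiator and target over real NICs: cvl_0_0 moves into a private namespace and takes the target address 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, reachability is verified with ping in both directions, and nvmf_tgt is then launched inside the namespace with nvme_auth logging enabled. A sketch of the same sequence; the interface names are the ones from this run and will differ on other hosts, and the nvmf_tgt path is shortened:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                    # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # default namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1             # target namespace -> default namespace
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &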
00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:27.834 00:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c1ceee141b6c4458581015e139f947da 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Mb9 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c1ceee141b6c4458581015e139f947da 0 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c1ceee141b6c4458581015e139f947da 0 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c1ceee141b6c4458581015e139f947da 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Mb9 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Mb9 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Mb9 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:27.834 00:13:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6447b72c1a529b11e8871d472a90cccf039b670642ddacbe5176a5af17033762 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Am4 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6447b72c1a529b11e8871d472a90cccf039b670642ddacbe5176a5af17033762 3 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6447b72c1a529b11e8871d472a90cccf039b670642ddacbe5176a5af17033762 3 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6447b72c1a529b11e8871d472a90cccf039b670642ddacbe5176a5af17033762 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Am4 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Am4 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Am4 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cd934cb7c076bc4a2891cf7ab8b07246ec5940ae3cf7de04 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7J0 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cd934cb7c076bc4a2891cf7ab8b07246ec5940ae3cf7de04 0 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cd934cb7c076bc4a2891cf7ab8b07246ec5940ae3cf7de04 0 
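Each gen_dhchap_key <digest> <len> call traced here draws len/2 random bytes as a hex string from /dev/urandom, wraps it in the DH-HMAC-CHAP secret envelope DHHC-1:<hash id>:<base64 blob>: (hash id 0/1/2/3 for null/sha256/sha384/sha512, matching the digest map in the trace), writes it to a mode-0600 temp file, and files the result into the interleaved keys[]/ckeys[] arrays used later as host and controller secrets. The python one-liner that does the wrapping is not expanded by xtrace; judging by the DHHC-1:00:YzFjZWVl... string used in the connect step below, which decodes back to the c1ceee... hex above plus four extra bytes, the blob is the ASCII hex secret with a CRC-32 appended. The sketch below is therefore an assumption about that helper rather than a copy of it, the CRC byte order in particular:

    key=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex characters, the "null 32" case above
    file=$(mktemp -t spdk.key-null.XXX)
    python3 - "$key" > "$file" <<'PY'
    import base64, sys, zlib
    secret = sys.argv[1].encode()                              # the ASCII hex string is the secret
    blob = secret + zlib.crc32(secret).to_bytes(4, "little")   # assumed little-endian CRC-32 suffix
    print("DHHC-1:00:" + base64.b64encode(blob).decode() + ":")
    PY
    chmod 0600 "$file"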
00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cd934cb7c076bc4a2891cf7ab8b07246ec5940ae3cf7de04 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7J0 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7J0 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.7J0 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5f4c261e52180e30e93a84cc4726fe28454fe9abe5f5c28f 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.52p 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5f4c261e52180e30e93a84cc4726fe28454fe9abe5f5c28f 2 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5f4c261e52180e30e93a84cc4726fe28454fe9abe5f5c28f 2 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5f4c261e52180e30e93a84cc4726fe28454fe9abe5f5c28f 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.52p 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.52p 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.52p 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:27.834 00:13:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c621e1b5599b72194e5694b018d574e2 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Xj0 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c621e1b5599b72194e5694b018d574e2 1 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c621e1b5599b72194e5694b018d574e2 1 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c621e1b5599b72194e5694b018d574e2 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Xj0 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Xj0 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Xj0 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=76cfe62071a4d1f616f5a0e9ef03abe3 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:31:27.834 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.1Xz 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 76cfe62071a4d1f616f5a0e9ef03abe3 1 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 76cfe62071a4d1f616f5a0e9ef03abe3 1 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=76cfe62071a4d1f616f5a0e9ef03abe3 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.1Xz 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.1Xz 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.1Xz 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=71936d8c694faefc84cf8bfb063678bed5edee8e44d5dbae 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.znm 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 71936d8c694faefc84cf8bfb063678bed5edee8e44d5dbae 2 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 71936d8c694faefc84cf8bfb063678bed5edee8e44d5dbae 2 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=71936d8c694faefc84cf8bfb063678bed5edee8e44d5dbae 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.znm 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.znm 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.znm 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:31:27.835 00:13:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1a57aafdca744c722fd175863121f996 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.VT5 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1a57aafdca744c722fd175863121f996 0 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1a57aafdca744c722fd175863121f996 0 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1a57aafdca744c722fd175863121f996 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.VT5 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.VT5 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.VT5 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ea60af12f4d76455e5073b425b1b0793aa455683daa250aea694955b98e53a5b 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.N21 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ea60af12f4d76455e5073b425b1b0793aa455683daa250aea694955b98e53a5b 3 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ea60af12f4d76455e5073b425b1b0793aa455683daa250aea694955b98e53a5b 3 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ea60af12f4d76455e5073b425b1b0793aa455683daa250aea694955b98e53a5b 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.N21 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.N21 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.N21 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 501831 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 501831 ']' 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:27.835 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Mb9 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Am4 ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Am4 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.7J0 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.52p ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.52p 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Xj0 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.1Xz ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Xz 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.znm 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.VT5 ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.VT5 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.N21 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:28.094 00:13:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:28.094 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:28.095 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:28.095 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:28.095 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:28.095 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:28.095 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:28.095 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:28.095 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:31:28.095 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:28.095 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:28.095 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:28.095 00:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:31:30.633 Waiting for block devices as requested 00:31:30.892 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:30.892 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:31.150 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:31.150 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:31.150 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:31.150 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:31.408 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:31.408 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:31.408 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:31.408 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:31.667 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:31.667 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:31.667 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:31.927 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:31.927 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:31.927 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:31.927 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:32.503 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:32.503 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:32.503 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:32.503 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:31:32.503 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:32.503 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:32.503 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:32.503 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:32.503 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py nvme0n1 00:31:32.763 No valid GPT data, bailing 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:32.763 00:13:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:31:32.763 00:31:32.763 Discovery Log Number of Records 2, Generation counter 2 00:31:32.763 =====Discovery Log Entry 0====== 00:31:32.763 trtype: tcp 00:31:32.763 adrfam: ipv4 00:31:32.763 subtype: current discovery subsystem 00:31:32.763 treq: not specified, sq flow control disable supported 00:31:32.763 portid: 1 00:31:32.763 trsvcid: 4420 00:31:32.763 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:32.763 traddr: 10.0.0.1 00:31:32.763 eflags: none 00:31:32.763 sectype: none 00:31:32.763 =====Discovery Log Entry 1====== 00:31:32.763 trtype: tcp 00:31:32.763 adrfam: ipv4 00:31:32.763 subtype: nvme subsystem 00:31:32.763 treq: not specified, sq flow control disable supported 00:31:32.763 portid: 1 00:31:32.763 trsvcid: 4420 00:31:32.763 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:32.763 traddr: 10.0.0.1 00:31:32.763 eflags: none 00:31:32.763 sectype: none 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
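The configure_kernel_target sequence above stands up a Linux kernel nvmet soft target over configfs: create the subsystem nqn.2024-02.io.spdk:cnode0 with one namespace, back that namespace with the local /dev/nvme0n1 (spdk-gpt.py's "No valid GPT data, bailing" confirms the disk carries no partition table), describe a TCP/IPv4 port on 10.0.0.1:4420, link the subsystem into the port, and verify with nvme discover using the host identity generated at the top of the test. xtrace shows the echo values but not their redirect targets, so the attribute paths below are the standard nvmet configfs names, not a copy of what common.sh writes:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"
    nvme discover -t tcp -a 10.0.0.1 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562

The expected result is the two-record discovery log shown above: the discovery subsystem itself plus nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420.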
-- host/auth.sh@49 -- # echo ffdhe2048 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.763 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.022 nvme0n1 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.022 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
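The trace above is one pass of connect_authenticate: the host bdev layer is first limited to a single digest/dhgroup combination, then attached to the target with a DH-HMAC-CHAP key pair, checked, and detached again. A minimal sketch of that host-side sequence, built only from the RPC calls that appear verbatim in this trace (rpc_cmd stands for the SPDK JSON-RPC helper used throughout the suite; key1/ckey1 are key names registered earlier in the run and are not defined here):

  # One connect_authenticate pass, reduced to the RPCs visible in the trace.
  digest=sha256 dhgroup=ffdhe2048 keyid=1

  # Restrict the host to one digest/dhgroup combination for this pass
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach to the target, authenticating with keyN and controller key ckeyN
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # Verify the controller came up, then tear it down before the next pass
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The same four-step pattern (set options, attach, check name, detach) repeats below for every digest, dhgroup, and key index.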
00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.023 00:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.281 nvme0n1 00:31:33.281 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.281 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.281 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.281 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.281 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.281 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.282 00:13:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.282 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.541 nvme0n1 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.541 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.800 nvme0n1 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.800 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.059 nvme0n1 00:31:34.059 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.059 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.059 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.059 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.059 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.059 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.059 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 
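The get_main_ns_ip block that repeats before every attach is only address selection: it maps the transport to an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and echoes its value, which is 10.0.0.1 throughout this run. A condensed, hedged sketch of that logic follows; the TEST_TRANSPORT and NVMF_INITIATOR_IP names are assumptions carried over from the surrounding test environment rather than a copy of nvmf/common.sh:

  # Condensed sketch of the address selection traced as get_main_ns_ip.
  get_main_ns_ip() {
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )
      local ip_name=${ip_candidates[$TEST_TRANSPORT]}  # tcp in this job
      local ip=${!ip_name}                             # indirect expansion: 10.0.0.1 here
      [[ -n $ip ]] && echo "$ip"
  }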
00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.060 nvme0n1 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.060 00:13:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.060 00:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:34.319 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:34.578 
00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.578 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.579 nvme0n1 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.579 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:34.839 00:13:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.839 nvme0n1 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:34.839 00:13:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:34.839 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.100 nvme0n1 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.100 00:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:35.100 00:13:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:35.100 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:35.101 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:35.101 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.101 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.360 nvme0n1 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
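The @100-@103 markers in the trace give away the overall driver: three nested loops over the digests, dhgroups, and key indices announced at the start of the test, with the target re-keyed via nvmet_auth_set_key and the host reconnected via connect_authenticate for every combination. A hedged sketch of that loop, with the array contents taken from the values echoed in the log and the keys/ckeys secrets and function bodies elided:

  # Driver loop implied by the host/auth.sh@100-@103 trace markers.
  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  # keys[0..4] / ckeys[0..4] hold the DHHC-1 secrets printed in the log (elided)

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # re-key the target side
              connect_authenticate "$digest" "$dhgroup" "$keyid" # authenticate, verify, detach
          done
      done
  done

The log below continues with the sha256/ffdhe4096 iterations of exactly this loop.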
00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.360 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.619 nvme0n1 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:35.619 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.187 00:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.446 nvme0n1 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.446 00:13:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.446 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.705 nvme0n1 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:36.705 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
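The nvmet_auth_set_key calls traced here (host/auth.sh@42-51) provision the target with the same digest, DH group and DHHC-1 secrets that the host will later present. A minimal sketch of that provisioning step for keyid=2, assuming the usual Linux nvmet configfs layout on the target side; the directory and attribute names below are inferred from the echoed values and are not confirmed by this log:

    # Assumed target-side equivalent of "nvmet_auth_set_key sha256 ffdhe4096 2"
    hostnqn=nqn.2024-02.io.spdk:host0
    cfg=/sys/kernel/config/nvmet/hosts/$hostnqn          # assumed configfs path
    echo 'hmac(sha256)' > "$cfg/dhchap_hash"             # digest under test (host/auth.sh@48)
    echo ffdhe4096      > "$cfg/dhchap_dhgroup"          # DH group under test (host/auth.sh@49)
    echo 'DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV:' > "$cfg/dhchap_key"
    echo 'DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U:' > "$cfg/dhchap_ctrl_key"

The controller key is only written when a ckey exists for that keyid, which is what the [[ -z ... ]] guard at host/auth.sh@51 in the trace is checking.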
00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:36.964 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:36.965 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:36.965 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:36.965 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:36.965 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.965 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 nvme0n1 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
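On the host side, each rpc_cmd in connect_authenticate (host/auth.sh@60-61) expands to an SPDK RPC: the initiator is first restricted to a single digest/DH-group pair, then the controller is attached with the numbered DH-HMAC-CHAP keys. A minimal sketch for keyid=2, assuming scripts/rpc.py from an SPDK checkout, a target already listening on 10.0.0.1:4420, and that the key2/ckey2 names were registered earlier in the run; the $rootdir path is a placeholder, not taken from this log:

    rootdir=/path/to/spdk                 # hypothetical checkout location
    rpc="$rootdir/scripts/rpc.py"
    keyid=2
    # Limit negotiation to the pair under test (sha256 + ffdhe4096 in this pass).
    "$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # Attach with the host key and, when present, the bidirectional controller key.
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

As the surrounding trace shows, the test then confirms the attach succeeded by checking that bdev_nvme_get_controllers piped through jq -r '.[].name' reports nvme0, detaches the controller, and repeats the cycle for the next keyid and DH group.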
00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:37.224 00:13:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.224 00:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.484 nvme0n1 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.484 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.744 nvme0n1 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:37.744 00:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.126 00:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.702 nvme0n1 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.702 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.961 nvme0n1 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:39.961 00:13:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.961 00:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.529 nvme0n1 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:40.529 
00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:40.529 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:40.530 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:40.530 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:40.530 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:40.530 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:40.530 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:40.530 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.530 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.788 nvme0n1 00:31:40.788 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.788 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:40.788 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.788 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:40.788 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:40.788 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.788 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:40.788 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:40.788 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.788 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.048 00:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.306 nvme0n1 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:41.306 00:13:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.306 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:41.873 nvme0n1 00:31:41.873 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.873 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:41.873 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:41.873 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.873 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.132 00:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.699 nvme0n1 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:42.699 00:13:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:42.699 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:42.700 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:42.700 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.700 00:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.267 nvme0n1 00:31:43.267 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.267 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.267 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.267 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.267 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:43.268 00:13:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.268 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.836 nvme0n1 00:31:43.836 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.836 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:43.836 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:43.836 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.836 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:43.836 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.836 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:43.836 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:43.836 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.836 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.094 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.095 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.095 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.095 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:44.095 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:44.095 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:44.095 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.095 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.095 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:44.095 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.095 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:44.095 00:13:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:44.095 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:44.095 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:44.095 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.095 00:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.665 nvme0n1 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:31:44.665 
00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:44.665 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:44.666 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.666 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.666 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:44.666 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.666 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:44.666 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:44.666 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:44.666 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:44.666 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.666 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.927 nvme0n1 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:44.927 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.928 nvme0n1 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:44.928 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.187 00:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.187 nvme0n1 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:31:45.187 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:45.451 00:13:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.451 nvme0n1 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
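Each iteration traced above exercises one digest/dhgroup/keyid combination with the same four host-side RPCs: bdev_nvme_set_options restricts the initiator's DH-HMAC-CHAP digests and DH groups to the pair under test, bdev_nvme_attach_controller connects to the target at 10.0.0.1:4420 using the named key (and controller key, when one exists for that keyid), bdev_nvme_get_controllers piped through jq confirms that controller nvme0 appeared, i.e. that authentication succeeded, and bdev_nvme_detach_controller tears it down before the next combination; nvmet_auth_set_key, called at the top of each iteration, installs the corresponding secret on the target side first. The sketch below condenses one iteration into a standalone script purely for illustration; it assumes SPDK's scripts/rpc.py is invoked directly instead of the framework's rpc_cmd wrapper, that a target application is already listening on the default RPC socket, and that the DH-HMAC-CHAP secrets were registered beforehand under the key names key1 and ckey1 (none of which is shown verbatim in this log).

digest=sha384          # hmac(sha384) on the target, matching nvmet_auth_set_key above
dhgroup=ffdhe2048
keyid=1                # keyid 4 has no controller key, so --dhchap-ctrlr-key is dropped there

# Allow only the digest/dhgroup pair under test so the handshake exercises exactly this combination.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect and authenticate; success creates bdev controller nvme0.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the controller exists (authentication passed), then clean up for the next round.
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0

The three nested loops visible in the trace (host/auth.sh@100, @101 and @102) simply repeat this block for every digest, DH group, and key index.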
00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:45.451 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.714 nvme0n1 00:31:45.714 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.714 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.714 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.714 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.714 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.714 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:45.715 00:13:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.715 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.974 nvme0n1 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.974 00:13:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:45.974 00:13:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.974 00:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.233 nvme0n1 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:46.233 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:46.234 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:46.234 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.234 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.234 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:46.234 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.234 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:46.234 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:46.234 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:46.234 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:46.234 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.234 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.493 nvme0n1 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.493 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.752 nvme0n1 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:46.752 
00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:46.752 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:46.753 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:46.753 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:46.753 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:46.753 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:46.753 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.753 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.012 nvme0n1 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.012 
00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.012 00:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.270 nvme0n1 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:47.270 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:47.529 00:13:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.529 nvme0n1 00:31:47.529 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.788 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.049 nvme0n1 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.049 00:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.309 nvme0n1 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.309 00:13:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.309 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:48.310 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.310 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.310 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.310 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.310 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:48.310 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.310 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:48.310 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:48.310 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:48.310 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:48.310 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.310 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.568 nvme0n1 00:31:48.569 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.569 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:48.569 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:48.569 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.569 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.569 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.569 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:48.569 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:48.569 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.569 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.569 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.569 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:48.569 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.828 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.088 nvme0n1 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.088 00:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.656 nvme0n1 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.656 00:13:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.656 00:13:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.656 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.914 nvme0n1 00:31:49.914 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.914 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:49.914 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:49.914 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.914 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:49.914 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.914 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:49.914 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:49.914 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.914 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.172 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.172 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.172 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:31:50.172 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.172 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.172 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:50.172 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:50.172 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:50.172 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:50.172 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.172 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:50.173 00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.173 
00:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.434 nvme0n1 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.434 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.003 nvme0n1 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.003 00:13:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.003 00:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.571 nvme0n1 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:51.571 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.572 00:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.140 nvme0n1 00:31:52.140 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.140 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.140 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.140 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.140 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.140 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.140 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.140 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.140 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:52.140 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.399 
00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.399 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.967 nvme0n1 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.967 00:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.535 nvme0n1 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.535 00:13:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:53.535 00:13:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.535 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.103 nvme0n1 00:31:54.103 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.103 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.103 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.103 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.103 00:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.103 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:54.362 nvme0n1 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.362 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.363 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.622 nvme0n1 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.622 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:31:54.623 
00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.623 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.882 nvme0n1 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:54.882 
00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:54.882 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:54.883 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:54.883 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:54.883 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:54.883 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:54.883 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:54.883 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:54.883 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:54.883 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:54.883 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.883 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.141 nvme0n1 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.141 00:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.399 nvme0n1 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:55.399 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.400 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.659 nvme0n1 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.659 
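The stretch above is where the outer iteration advances: host/auth.sh@101-102 loop over every DH group and every key index, host/auth.sh@103 provisions the target side for that combination, and connect_authenticate then exercises the host side. A minimal bash sketch of that driver loop follows; the loop and helper names are taken from the trace, while the keys/ckeys arrays, the helper bodies, and the outer digest loop are assumed rather than shown in this section.

  # Sketch of the loop structure traced at host/auth.sh@101-103; the outer digest loop is assumed.
  for digest in "${digests[@]}"; do                      # only sha512 is exercised in this stretch
    for dhgroup in "${dhgroups[@]}"; do                  # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
      for keyid in "${!keys[@]}"; do                     # key indices 0..4
        nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # target-side key setup (auth.sh@42-51)
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # host-side attach/verify/detach (auth.sh@55-65)
      done
    done
  done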
00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:55.659 00:13:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.659 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.918 nvme0n1 00:31:55.918 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.918 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.918 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:55.919 00:13:30 
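Each nvmet_auth_set_key call above boils down to three echoes at host/auth.sh@48-50: the HMAC name ('hmac(sha512)'), the DH group, and the DHHC-1 secret for that key index. A hypothetical reimplementation is sketched below; the configfs destination and attribute names are assumptions about the kernel nvmet target this test drives, not something shown in the log, and the real helper may differ.

  # Hypothetical sketch only; paths and attribute names are assumptions.
  nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local hostnqn=nqn.2024-02.io.spdk:host0                  # host NQN seen in the attach calls
    local cfg=/sys/kernel/config/nvmet/hosts/$hostnqn        # assumed configfs location
    echo "hmac($digest)"  > "$cfg/dhchap_hash"               # 'hmac(sha512)' in the trace
    echo "$dhgroup"       > "$cfg/dhchap_dhgroup"
    echo "${keys[keyid]}" > "$cfg/dhchap_key"                # DHHC-1:xx:...: secret
    if [[ -n ${ckeys[keyid]} ]]; then                        # ctrlr key only for bidirectional auth
      echo "${ckeys[keyid]}" > "$cfg/dhchap_ctrl_key"
    fi
  }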
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.919 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.178 nvme0n1 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:56.178 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.179 00:13:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.179 00:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.438 nvme0n1 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:56.438 
00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:56.438 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:56.439 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.439 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
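Every connect_authenticate pass in this trace reduces to the same four RPCs, all visible verbatim above and below: restrict the host to one digest/DH-group pair, attach with the DH-HMAC-CHAP key names registered earlier in the test, confirm the controller actually came up, and detach again. Condensed here for the sha512/ffdhe3072/keyid=4 case that just ran; rpc_cmd is the autotest wrapper around scripts/rpc.py.

  # Condensed from the connect_authenticate trace (host/auth.sh@55-65).
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key key4                    # keyid 4 has no ctrlr key, so no --dhchap-ctrlr-key
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # authentication succeeded
  rpc_cmd bdev_nvme_detach_controller nvme0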
00:31:56.696 nvme0n1 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.696 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:56.697 00:13:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.697 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.956 nvme0n1 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.956 00:13:31 
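The get_main_ns_ip helper traced repeatedly at nvmf/common.sh@769-783 only decides which address the attach call uses: it maps the transport to the name of an environment variable and prints that variable's value, 10.0.0.1 in this run. A rough reconstruction is sketched below; the control flow around the two emptiness checks is assumed, and TEST_TRANSPORT stands in for whatever variable holds the literal "tcp" seen in the trace.

  # Rough reconstruction of the logic traced at nvmf/common.sh@769-783; details are assumptions.
  get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}       # "NVMF_INITIATOR_IP" for tcp
    [[ -z ${!ip} ]] && return 1                # indirect expansion: that variable must be set
    echo "${!ip}"                              # 10.0.0.1 in this run
  }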
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.956 00:13:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.956 00:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.215 nvme0n1 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:57.215 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:57.216 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.216 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.216 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:57.216 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.216 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:57.216 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:57.216 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:57.473 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:57.473 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.473 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.473 nvme0n1 00:31:57.473 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.473 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.473 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.473 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.473 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.473 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:57.731 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:57.732 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:57.732 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:57.732 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.732 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.990 nvme0n1 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:57.990 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.991 00:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.258 nvme0n1 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.258 00:13:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.258 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.827 nvme0n1 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:58.827 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.828 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:58.828 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:58.828 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:58.828 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:58.828 00:13:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.828 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.087 nvme0n1 00:31:59.087 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.087 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.087 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.087 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.087 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.087 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.087 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.087 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.087 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.087 00:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:59.087 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.346 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.605 nvme0n1 00:31:59.605 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.605 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.605 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.605 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.605 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.605 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.606 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.178 nvme0n1 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:00.178 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:00.179 00:13:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.179 00:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.437 nvme0n1 00:32:00.437 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.437 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.437 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.437 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.437 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.437 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.437 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.437 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.437 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.437 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.437 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFjZWVlMTQxYjZjNDQ1ODU4MTAxNWUxMzlmOTQ3ZGGW3/EM: 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: ]] 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjQ0N2I3MmMxYTUyOWIxMWU4ODcxZDQ3MmE5MGNjY2YwMzliNjcwNjQyZGRhY2JlNTE3NmE1YWYxNzAzMzc2MnAsuhE=: 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.696 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.268 nvme0n1 00:32:01.268 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.268 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.268 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.268 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.268 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.268 00:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.268 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.836 nvme0n1 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.836 00:13:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.836 00:13:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.836 00:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.405 nvme0n1 00:32:02.405 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.405 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.405 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.405 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.405 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.405 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.405 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.405 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.405 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.405 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzE5MzZkOGM2OTRmYWVmYzg0Y2Y4YmZiMDYzNjc4YmVkNWVkZWU4ZTQ0ZDVkYmFlp9r1xw==: 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: ]] 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MWE1N2FhZmRjYTc0NGM3MjJmZDE3NTg2MzEyMWY5OTYWjSRu: 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.664 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:02.665 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.665 
00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.233 nvme0n1 00:32:03.233 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.233 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.233 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.233 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.233 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.233 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.233 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.233 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.233 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.233 00:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWE2MGFmMTJmNGQ3NjQ1NWU1MDczYjQyNWIxYjA3OTNhYTQ1NTY4M2RhYTI1MGFlYTY5NDk1NWI5OGU1M2E1YtCNuKc=: 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.233 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.800 nvme0n1 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.800 request: 00:32:03.800 { 00:32:03.800 "name": "nvme0", 00:32:03.800 "trtype": "tcp", 00:32:03.800 "traddr": "10.0.0.1", 00:32:03.800 "adrfam": "ipv4", 00:32:03.800 "trsvcid": "4420", 00:32:03.800 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:03.800 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:03.800 "prchk_reftag": false, 00:32:03.800 "prchk_guard": false, 00:32:03.800 "hdgst": false, 00:32:03.800 "ddgst": false, 00:32:03.800 "allow_unrecognized_csi": false, 00:32:03.800 "method": "bdev_nvme_attach_controller", 00:32:03.800 "req_id": 1 00:32:03.800 } 00:32:03.800 Got JSON-RPC error response 00:32:03.800 response: 00:32:03.800 { 00:32:03.800 "code": -5, 00:32:03.800 "message": "Input/output error" 00:32:03.800 } 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:03.800 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
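The xtrace records above show host/auth.sh exercising SPDK's DH-HMAC-CHAP support over JSON-RPC: bdev_nvme_set_options restricts the host to the sha256 digest and the ffdhe2048 DH group, and bdev_nvme_attach_controller is then expected to fail (the NOT wrapper) because the host offers no usable key, which the test asserts via the -5 "Input/output error" JSON-RPC response. A minimal sketch of the same calls, assuming rpc_cmd simply forwards its arguments to SPDK's scripts/rpc.py as the test helpers normally do, and that the key names used later in the log (key1, key2, ckey1, ckey2) were registered earlier in the script, outside this excerpt:

  # select the DH-HMAC-CHAP digest and DH group for the host (values taken from the log above)
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # attach without any --dhchap-key: the authenticating target is expected to reject this
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0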
00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.060 request: 00:32:04.060 { 00:32:04.060 "name": "nvme0", 00:32:04.060 "trtype": "tcp", 00:32:04.060 "traddr": "10.0.0.1", 00:32:04.060 "adrfam": "ipv4", 00:32:04.060 "trsvcid": "4420", 00:32:04.060 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:04.060 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:04.060 "prchk_reftag": false, 00:32:04.060 "prchk_guard": false, 00:32:04.060 "hdgst": false, 00:32:04.060 "ddgst": false, 00:32:04.060 "dhchap_key": "key2", 00:32:04.060 "allow_unrecognized_csi": false, 00:32:04.060 "method": "bdev_nvme_attach_controller", 00:32:04.060 "req_id": 1 00:32:04.060 } 00:32:04.060 Got JSON-RPC error response 00:32:04.060 response: 00:32:04.060 { 00:32:04.060 "code": -5, 00:32:04.060 "message": "Input/output error" 00:32:04.060 } 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:04.060 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
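Each rejected attach is followed by the same bookkeeping, visible in the host/auth.sh@114 and @120 records: bdev_nvme_get_controllers is piped through jq length and compared with 0 to confirm the failed authentication left no controller behind. Under the same rpc.py assumption as above, the check amounts to:

  # expected to print 0 after every rejected attach attempt
  ./scripts/rpc.py bdev_nvme_get_controllers | jq length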
00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.061 request: 00:32:04.061 { 00:32:04.061 "name": "nvme0", 00:32:04.061 "trtype": "tcp", 00:32:04.061 "traddr": "10.0.0.1", 00:32:04.061 "adrfam": "ipv4", 00:32:04.061 "trsvcid": "4420", 00:32:04.061 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:04.061 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:04.061 "prchk_reftag": false, 00:32:04.061 "prchk_guard": false, 00:32:04.061 "hdgst": false, 00:32:04.061 "ddgst": false, 00:32:04.061 "dhchap_key": "key1", 00:32:04.061 "dhchap_ctrlr_key": "ckey2", 00:32:04.061 "allow_unrecognized_csi": false, 00:32:04.061 "method": "bdev_nvme_attach_controller", 00:32:04.061 "req_id": 1 00:32:04.061 } 00:32:04.061 Got JSON-RPC error response 00:32:04.061 response: 00:32:04.061 { 00:32:04.061 "code": -5, 00:32:04.061 "message": "Input/output 
error" 00:32:04.061 } 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.061 00:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.320 nvme0n1 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.320 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.579 request: 00:32:04.579 { 00:32:04.579 "name": "nvme0", 00:32:04.579 "dhchap_key": "key1", 00:32:04.579 "dhchap_ctrlr_key": "ckey2", 00:32:04.579 "method": "bdev_nvme_set_keys", 00:32:04.579 "req_id": 1 00:32:04.579 } 00:32:04.579 Got JSON-RPC error response 00:32:04.579 response: 00:32:04.579 { 00:32:04.579 "code": -13, 00:32:04.579 "message": "Permission denied" 00:32:04.579 } 00:32:04.579 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:04.579 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:04.579 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:04.579 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:04.579 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:32:04.579 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.579 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:04.579 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.579 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.579 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.579 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:04.579 00:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:05.515 00:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.515 00:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:05.515 00:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.515 00:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.515 00:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.515 00:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:32:05.515 00:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Q5MzRjYjdjMDc2YmM0YTI4OTFjZjdhYjhiMDcyNDZlYzU5NDBhZTNjZjdkZTA07NmM+A==: 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: ]] 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NWY0YzI2MWU1MjE4MGUzMGU5M2E4NGNjNDcyNmZlMjg0NTRmZTlhYmU1ZjVjMjhmfmgXqA==: 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.892 nvme0n1 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzYyMWUxYjU1OTliNzIxOTRlNTY5NGIwMThkNTc0ZTIlvHRV: 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: ]] 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzZjZmU2MjA3MWE0ZDFmNjE2ZjVhMGU5ZWYwM2FiZTNRzr1U: 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.892 request: 00:32:06.892 { 00:32:06.892 "name": "nvme0", 00:32:06.892 "dhchap_key": "key2", 00:32:06.892 "dhchap_ctrlr_key": "ckey1", 00:32:06.892 "method": "bdev_nvme_set_keys", 00:32:06.892 "req_id": 1 00:32:06.892 } 00:32:06.892 Got JSON-RPC error response 00:32:06.892 response: 00:32:06.892 { 00:32:06.892 "code": -13, 00:32:06.892 "message": "Permission denied" 00:32:06.892 } 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:06.892 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:32:06.893 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:06.893 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:06.893 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:06.893 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.893 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:06.893 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.893 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.893 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.893 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:32:06.893 00:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:32:07.827 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.827 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:32:07.827 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.827 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.827 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.827 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:32:07.827 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:32:07.827 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:32:07.827 00:13:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:07.827 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:07.827 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:08.086 rmmod nvme_tcp 00:32:08.086 rmmod nvme_fabrics 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 501831 ']' 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 501831 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 501831 ']' 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 501831 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501831 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 501831' 00:32:08.086 killing process with pid 501831 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 501831 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 501831 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:08.086 00:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:08.086 00:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:08.086 00:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:08.086 00:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.086 00:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:32:08.086 00:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.621 00:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:10.621 00:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:10.621 00:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:10.621 00:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:10.621 00:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:10.621 00:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:32:10.621 00:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:10.621 00:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:10.621 00:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:10.621 00:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:10.621 00:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:10.621 00:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:10.621 00:13:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:32:13.161 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:13.161 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:14.103 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:32:14.103 00:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Mb9 /tmp/spdk.key-null.7J0 /tmp/spdk.key-sha256.Xj0 /tmp/spdk.key-sha384.znm /tmp/spdk.key-sha512.N21 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvme-auth.log 00:32:14.361 00:13:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:32:16.904 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:32:16.904 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:16.904 0000:00:04.6 (8086 2021): Already using the vfio-pci 
driver 00:32:16.904 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:32:16.904 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:32:16.904 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:32:16.904 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:32:16.904 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:32:16.904 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:32:16.904 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:32:16.904 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:32:16.904 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:32:16.904 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:32:16.904 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:32:16.904 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:32:16.904 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:32:16.904 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:32:17.163 00:32:17.163 real 0m56.100s 00:32:17.163 user 0m50.928s 00:32:17.163 sys 0m12.567s 00:32:17.163 00:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:17.163 00:13:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.163 ************************************ 00:32:17.163 END TEST nvmf_auth_host 00:32:17.163 ************************************ 00:32:17.163 00:13:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:17.163 00:13:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:17.163 00:13:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:17.163 00:13:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:17.163 00:13:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.163 ************************************ 00:32:17.163 START TEST nvmf_digest 00:32:17.163 ************************************ 00:32:17.163 00:13:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:17.163 * Looking for test storage... 
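Just before the nvmf_digest banner above, the auth suite's cleanup (recorded a little earlier) removes the temporary /tmp/spdk.key-* files it generated and tears down the configfs-based kernel nvmet target in roughly the reverse order of its creation before unloading the modules. Restated in plain shell for readability, with paths copied from the log; the redirect target of the single "echo 0" step is not visible in the xtrace output, so it is omitted rather than guessed:

  rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_tcp nvmet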
00:32:17.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:32:17.163 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:17.163 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:32:17.163 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:17.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.423 --rc genhtml_branch_coverage=1 00:32:17.423 --rc genhtml_function_coverage=1 00:32:17.423 --rc genhtml_legend=1 00:32:17.423 --rc geninfo_all_blocks=1 00:32:17.423 --rc geninfo_unexecuted_blocks=1 00:32:17.423 00:32:17.423 ' 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:17.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.423 --rc genhtml_branch_coverage=1 00:32:17.423 --rc genhtml_function_coverage=1 00:32:17.423 --rc genhtml_legend=1 00:32:17.423 --rc geninfo_all_blocks=1 00:32:17.423 --rc geninfo_unexecuted_blocks=1 00:32:17.423 00:32:17.423 ' 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:17.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.423 --rc genhtml_branch_coverage=1 00:32:17.423 --rc genhtml_function_coverage=1 00:32:17.423 --rc genhtml_legend=1 00:32:17.423 --rc geninfo_all_blocks=1 00:32:17.423 --rc geninfo_unexecuted_blocks=1 00:32:17.423 00:32:17.423 ' 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:17.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.423 --rc genhtml_branch_coverage=1 00:32:17.423 --rc genhtml_function_coverage=1 00:32:17.423 --rc genhtml_legend=1 00:32:17.423 --rc geninfo_all_blocks=1 00:32:17.423 --rc geninfo_unexecuted_blocks=1 00:32:17.423 00:32:17.423 ' 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.423 
00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.423 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:17.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:17.424 00:13:52 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:32:17.424 00:13:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.996 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.997 
00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:23.997 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:23.997 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:23.997 Found net devices under 0000:86:00.0: cvl_0_0 
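The digest suite's nvmftestinit then probes for supported NICs: it collects the Intel e810 and x722 plus Mellanox device IDs, and for each matching PCI function looks under /sys/bus/pci/devices/<bdf>/net/ for the bound netdev, which produces the "Found net devices under 0000:86:00.x" lines in the log. The same lookup can be done by hand; the loop below is only a sketch using the two e810 addresses reported above, with ls standing in for the script's internal glob:

  for pci in 0000:86:00.0 0000:86:00.1; do
      ls /sys/bus/pci/devices/$pci/net/     # prints the bound netdev, e.g. cvl_0_0 / cvl_0_1
  done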
00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:23.997 Found net devices under 0000:86:00.1: cvl_0_1 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:23.997 00:13:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:23.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:23.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:32:23.997 00:32:23.997 --- 10.0.0.2 ping statistics --- 00:32:23.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.997 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:23.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:32:23.997 00:32:23.997 --- 10.0.0.1 ping statistics --- 00:32:23.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.997 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:23.997 ************************************ 00:32:23.997 START TEST nvmf_digest_clean 00:32:23.997 ************************************ 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=515974 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 515974 00:32:23.997 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 515974 ']' 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:23.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:23.998 [2024-12-10 00:13:58.233118] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:32:23.998 [2024-12-10 00:13:58.233176] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:23.998 [2024-12-10 00:13:58.313974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.998 [2024-12-10 00:13:58.353778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:23.998 [2024-12-10 00:13:58.353814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:23.998 [2024-12-10 00:13:58.353822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:23.998 [2024-12-10 00:13:58.353828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:23.998 [2024-12-10 00:13:58.353834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
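
Editor's note: for readers reconstructing the test bed, the nvmf_tcp_init plumbing traced above (namespace, addressing, firewall rule, reachability checks) condenses to roughly the following. Interface names and addresses are exactly the ones in this log; this is a sketch of the traced commands, not the full helper:

  # Rough condensation of nvmf_tcp_init as traced above (cvl_0_0 = target port, cvl_0_1 = initiator port).
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port; the harness also tags the rule with an SPDK_NVMF comment
  ping -c 1 10.0.0.2                                         # root namespace -> target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # namespace -> initiator address

The nvmf_tgt is then launched inside cvl_0_0_ns_spdk (ip netns exec ... nvmf_tgt --wait-for-rpc), which is why the listener notice below reports 10.0.0.2:4420.
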
00:32:23.998 [2024-12-10 00:13:58.354388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:23.998 null0 00:32:23.998 [2024-12-10 00:13:58.507112] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.998 [2024-12-10 00:13:58.531309] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=516089 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 516089 /var/tmp/bperf.sock 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 516089 ']' 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:23.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:23.998 [2024-12-10 00:13:58.583926] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:32:23.998 [2024-12-10 00:13:58.583968] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid516089 ] 00:32:23.998 [2024-12-10 00:13:58.658564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.998 [2024-12-10 00:13:58.698125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:23.998 00:13:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:24.257 00:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:24.257 00:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:24.516 nvme0n1 00:32:24.516 00:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:24.516 00:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:24.774 Running I/O for 2 seconds... 
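
Editor's note: the run_bperf iteration just traced is driven entirely over bdevperf's RPC socket. A compressed sketch of that sequence, with paths shortened to be relative to the SPDK tree and arguments copied from the log:

  # One run_bperf pass as traced above (digest enabled, crc32c expected in software).
  SOCK=/var/tmp/bperf.sock
  ./build/examples/bdevperf -m 2 -r "$SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # bdevperf idles until framework_start_init, leaving room to adjust accel settings first
  ./scripts/rpc.py -s "$SOCK" framework_start_init
  ./scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests    # prints the JSON result shown below

The --ddgst flag is what turns on the NVMe/TCP data digest that this test exercises; later runs repeat the same sequence with different -w/-o/-q values.
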
00:32:26.650 24505.00 IOPS, 95.72 MiB/s [2024-12-09T23:14:01.586Z] 24797.50 IOPS, 96.87 MiB/s 00:32:26.650 Latency(us) 00:32:26.650 [2024-12-09T23:14:01.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.650 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:26.650 nvme0n1 : 2.00 24812.81 96.93 0.00 0.00 5152.98 2564.45 15386.71 00:32:26.650 [2024-12-09T23:14:01.586Z] =================================================================================================================== 00:32:26.650 [2024-12-09T23:14:01.586Z] Total : 24812.81 96.93 0.00 0.00 5152.98 2564.45 15386.71 00:32:26.650 { 00:32:26.650 "results": [ 00:32:26.650 { 00:32:26.650 "job": "nvme0n1", 00:32:26.650 "core_mask": "0x2", 00:32:26.650 "workload": "randread", 00:32:26.650 "status": "finished", 00:32:26.650 "queue_depth": 128, 00:32:26.650 "io_size": 4096, 00:32:26.650 "runtime": 2.004771, 00:32:26.650 "iops": 24812.80904402548, 00:32:26.650 "mibps": 96.92503532822452, 00:32:26.650 "io_failed": 0, 00:32:26.650 "io_timeout": 0, 00:32:26.650 "avg_latency_us": 5152.978021172752, 00:32:26.650 "min_latency_us": 2564.4521739130437, 00:32:26.650 "max_latency_us": 15386.713043478261 00:32:26.650 } 00:32:26.650 ], 00:32:26.650 "core_count": 1 00:32:26.650 } 00:32:26.650 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:26.650 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:26.650 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:26.650 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:26.650 | select(.opcode=="crc32c") 00:32:26.650 | "\(.module_name) \(.executed)"' 00:32:26.650 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 516089 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 516089 ']' 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 516089 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 516089 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 516089' 00:32:26.909 killing process with pid 516089 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 516089 00:32:26.909 Received shutdown signal, test time was about 2.000000 seconds 00:32:26.909 00:32:26.909 Latency(us) 00:32:26.909 [2024-12-09T23:14:01.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:26.909 [2024-12-09T23:14:01.845Z] =================================================================================================================== 00:32:26.909 [2024-12-09T23:14:01.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:26.909 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 516089 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=516569 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 516569 /var/tmp/bperf.sock 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 516569 ']' 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:27.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:27.168 00:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:27.168 [2024-12-10 00:14:01.999189] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:32:27.168 [2024-12-10 00:14:01.999239] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid516569 ] 00:32:27.168 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:27.168 Zero copy mechanism will not be used. 00:32:27.168 [2024-12-10 00:14:02.074178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.427 [2024-12-10 00:14:02.116457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.427 00:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.427 00:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:32:27.427 00:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:27.427 00:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:27.427 00:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:27.685 00:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:27.685 00:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:27.945 nvme0n1 00:32:27.945 00:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:27.945 00:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:28.204 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:28.204 Zero copy mechanism will not be used. 00:32:28.204 Running I/O for 2 seconds... 
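
Editor's note: after each workload the harness verifies, via the accel framework statistics, that crc32c digests were actually computed and by the expected module (software here, since scan_dsa=false). A rough sketch of that check, mirroring the get_accel_stats/jq pipeline visible after the first run above:

  # Post-run digest verification (jq filter copied from the trace).
  read -r acc_module acc_executed < <(
      ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  exp_module=software
  (( acc_executed > 0 ))                     # some digests must have been computed
  [[ $acc_module == "$exp_module" ]]         # and by the expected accel module
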
00:32:30.080 6081.00 IOPS, 760.12 MiB/s [2024-12-09T23:14:05.016Z] 5863.00 IOPS, 732.88 MiB/s 00:32:30.080 Latency(us) 00:32:30.080 [2024-12-09T23:14:05.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.080 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:30.080 nvme0n1 : 2.00 5863.59 732.95 0.00 0.00 2725.86 658.92 5727.28 00:32:30.080 [2024-12-09T23:14:05.016Z] =================================================================================================================== 00:32:30.080 [2024-12-09T23:14:05.016Z] Total : 5863.59 732.95 0.00 0.00 2725.86 658.92 5727.28 00:32:30.080 { 00:32:30.080 "results": [ 00:32:30.080 { 00:32:30.080 "job": "nvme0n1", 00:32:30.080 "core_mask": "0x2", 00:32:30.080 "workload": "randread", 00:32:30.080 "status": "finished", 00:32:30.080 "queue_depth": 16, 00:32:30.080 "io_size": 131072, 00:32:30.080 "runtime": 2.002528, 00:32:30.080 "iops": 5863.588424231771, 00:32:30.080 "mibps": 732.9485530289713, 00:32:30.080 "io_failed": 0, 00:32:30.080 "io_timeout": 0, 00:32:30.080 "avg_latency_us": 2725.864901172306, 00:32:30.080 "min_latency_us": 658.9217391304347, 00:32:30.080 "max_latency_us": 5727.276521739131 00:32:30.080 } 00:32:30.080 ], 00:32:30.080 "core_count": 1 00:32:30.080 } 00:32:30.080 00:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:30.080 00:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:30.080 00:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:30.080 00:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:30.080 | select(.opcode=="crc32c") 00:32:30.080 | "\(.module_name) \(.executed)"' 00:32:30.080 00:14:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 516569 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 516569 ']' 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 516569 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 516569 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 516569' 00:32:30.343 killing process with pid 516569 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 516569 00:32:30.343 Received shutdown signal, test time was about 2.000000 seconds 00:32:30.343 00:32:30.343 Latency(us) 00:32:30.343 [2024-12-09T23:14:05.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.343 [2024-12-09T23:14:05.279Z] =================================================================================================================== 00:32:30.343 [2024-12-09T23:14:05.279Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:30.343 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 516569 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=517186 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 517186 /var/tmp/bperf.sock 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 517186 ']' 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:30.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:30.604 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:30.604 [2024-12-10 00:14:05.435892] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:32:30.604 [2024-12-10 00:14:05.435943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid517186 ] 00:32:30.604 [2024-12-10 00:14:05.513448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.863 [2024-12-10 00:14:05.553287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.863 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.863 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:32:30.863 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:30.863 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:30.863 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:31.122 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:31.122 00:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:31.381 nvme0n1 00:32:31.381 00:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:31.381 00:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:31.381 Running I/O for 2 seconds... 
00:32:33.693 26678.00 IOPS, 104.21 MiB/s [2024-12-09T23:14:08.629Z] 26755.00 IOPS, 104.51 MiB/s 00:32:33.693 Latency(us) 00:32:33.693 [2024-12-09T23:14:08.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.693 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:33.693 nvme0n1 : 2.01 26755.75 104.51 0.00 0.00 4775.67 3604.48 10086.85 00:32:33.693 [2024-12-09T23:14:08.629Z] =================================================================================================================== 00:32:33.693 [2024-12-09T23:14:08.629Z] Total : 26755.75 104.51 0.00 0.00 4775.67 3604.48 10086.85 00:32:33.693 { 00:32:33.693 "results": [ 00:32:33.693 { 00:32:33.693 "job": "nvme0n1", 00:32:33.693 "core_mask": "0x2", 00:32:33.693 "workload": "randwrite", 00:32:33.693 "status": "finished", 00:32:33.693 "queue_depth": 128, 00:32:33.693 "io_size": 4096, 00:32:33.693 "runtime": 2.005924, 00:32:33.693 "iops": 26755.749470069655, 00:32:33.693 "mibps": 104.51464636745959, 00:32:33.693 "io_failed": 0, 00:32:33.693 "io_timeout": 0, 00:32:33.693 "avg_latency_us": 4775.666895245502, 00:32:33.693 "min_latency_us": 3604.48, 00:32:33.693 "max_latency_us": 10086.845217391305 00:32:33.693 } 00:32:33.693 ], 00:32:33.693 "core_count": 1 00:32:33.693 } 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:33.693 | select(.opcode=="crc32c") 00:32:33.693 | "\(.module_name) \(.executed)"' 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 517186 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 517186 ']' 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 517186 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 517186 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 517186' 00:32:33.693 killing process with pid 517186 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 517186 00:32:33.693 Received shutdown signal, test time was about 2.000000 seconds 00:32:33.693 00:32:33.693 Latency(us) 00:32:33.693 [2024-12-09T23:14:08.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.693 [2024-12-09T23:14:08.629Z] =================================================================================================================== 00:32:33.693 [2024-12-09T23:14:08.629Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:33.693 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 517186 00:32:33.952 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:33.952 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:33.952 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:33.952 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:33.952 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:33.952 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=517729 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 517729 /var/tmp/bperf.sock 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 517729 ']' 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:33.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:33.953 [2024-12-10 00:14:08.722300] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:32:33.953 [2024-12-10 00:14:08.722352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid517729 ] 00:32:33.953 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:33.953 Zero copy mechanism will not be used. 00:32:33.953 [2024-12-10 00:14:08.798600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.953 [2024-12-10 00:14:08.835435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:33.953 00:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:34.524 00:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:34.524 00:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:34.524 nvme0n1 00:32:34.524 00:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:34.524 00:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:34.785 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:34.785 Zero copy mechanism will not be used. 00:32:34.785 Running I/O for 2 seconds... 
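
Editor's note: every bdevperf instance above is torn down with the killprocess helper. The real helper lives in autotest_common.sh and handles more cases (for example a sudo wrapper process); reconstructed from only the checks visible in this trace, it amounts to something like:

  # Simplified reconstruction of the killprocess pattern seen after each run.
  killprocess() {
      local pid=$1 process_name=
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 1                  # is the process still alive?
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1                  # the trace only shows the non-sudo branch
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }
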
00:32:36.659 6355.00 IOPS, 794.38 MiB/s [2024-12-09T23:14:11.595Z] 6329.00 IOPS, 791.12 MiB/s 00:32:36.659 Latency(us) 00:32:36.659 [2024-12-09T23:14:11.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.659 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:36.659 nvme0n1 : 2.00 6325.38 790.67 0.00 0.00 2524.56 1759.50 11283.59 00:32:36.659 [2024-12-09T23:14:11.595Z] =================================================================================================================== 00:32:36.659 [2024-12-09T23:14:11.595Z] Total : 6325.38 790.67 0.00 0.00 2524.56 1759.50 11283.59 00:32:36.659 { 00:32:36.659 "results": [ 00:32:36.659 { 00:32:36.659 "job": "nvme0n1", 00:32:36.659 "core_mask": "0x2", 00:32:36.659 "workload": "randwrite", 00:32:36.659 "status": "finished", 00:32:36.659 "queue_depth": 16, 00:32:36.659 "io_size": 131072, 00:32:36.659 "runtime": 2.004307, 00:32:36.659 "iops": 6325.378297835611, 00:32:36.659 "mibps": 790.6722872294514, 00:32:36.659 "io_failed": 0, 00:32:36.659 "io_timeout": 0, 00:32:36.659 "avg_latency_us": 2524.5625498467048, 00:32:36.659 "min_latency_us": 1759.4991304347825, 00:32:36.659 "max_latency_us": 11283.589565217391 00:32:36.659 } 00:32:36.659 ], 00:32:36.659 "core_count": 1 00:32:36.659 } 00:32:36.659 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:36.659 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:36.659 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:36.659 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:36.659 | select(.opcode=="crc32c") 00:32:36.659 | "\(.module_name) \(.executed)"' 00:32:36.659 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 517729 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 517729 ']' 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 517729 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 517729 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 517729' 00:32:36.918 killing process with pid 517729 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 517729 00:32:36.918 Received shutdown signal, test time was about 2.000000 seconds 00:32:36.918 00:32:36.918 Latency(us) 00:32:36.918 [2024-12-09T23:14:11.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.918 [2024-12-09T23:14:11.854Z] =================================================================================================================== 00:32:36.918 [2024-12-09T23:14:11.854Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:36.918 00:14:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 517729 00:32:37.177 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 515974 00:32:37.177 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 515974 ']' 00:32:37.177 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 515974 00:32:37.177 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:32:37.177 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.177 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 515974 00:32:37.177 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:37.177 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:37.177 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 515974' 00:32:37.177 killing process with pid 515974 00:32:37.177 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 515974 00:32:37.177 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 515974 00:32:37.436 00:32:37.436 real 0m14.045s 00:32:37.436 user 0m26.971s 00:32:37.436 sys 0m4.556s 00:32:37.436 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:37.436 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:37.436 ************************************ 00:32:37.436 END TEST nvmf_digest_clean 00:32:37.436 ************************************ 00:32:37.436 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:37.436 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:37.436 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:37.436 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:37.436 ************************************ 00:32:37.436 START TEST nvmf_digest_error 00:32:37.436 ************************************ 00:32:37.437 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:32:37.437 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:37.437 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:37.437 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:37.437 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:37.437 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=518236 00:32:37.437 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 518236 00:32:37.437 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:37.437 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 518236 ']' 00:32:37.437 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:37.437 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:37.437 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:37.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:37.437 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:37.437 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:37.437 [2024-12-10 00:14:12.344308] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:32:37.437 [2024-12-10 00:14:12.344354] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:37.696 [2024-12-10 00:14:12.422648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.696 [2024-12-10 00:14:12.461985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:37.696 [2024-12-10 00:14:12.462022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:37.696 [2024-12-10 00:14:12.462029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:37.696 [2024-12-10 00:14:12.462035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:37.696 [2024-12-10 00:14:12.462041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:37.696 [2024-12-10 00:14:12.462590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.696 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:37.696 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:32:37.696 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:37.696 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:37.696 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:37.696 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:37.696 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:37.696 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.696 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:37.696 [2024-12-10 00:14:12.535049] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:37.696 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.696 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:37.696 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:37.696 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.696 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:37.696 null0 00:32:37.696 [2024-12-10 00:14:12.626367] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.955 [2024-12-10 00:14:12.650553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=518434 00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 518434 /var/tmp/bperf.sock 00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 518434 ']' 
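
Editor's note: the digest-error variant differs from the clean test in one target-side step, traced just above: crc32c is reassigned from the default software module to accel's error-injection module before target init completes. In plain rpc.py terms, against the target's default /var/tmp/spdk.sock (a sketch; the subsequent common_target_config step is fed from a heredoc and not expanded in the trace):

  # Target-side opcode reroute for the error test (arguments copied from the trace).
  # Must run while the target is still paused by --wait-for-rpc.
  ./scripts/rpc.py accel_assign_opc -o crc32c -m error
  # -> accel_rpc.c: Operation crc32c will be assigned to module error
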
00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:37.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:37.956 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:37.956 [2024-12-10 00:14:12.703966] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:32:37.956 [2024-12-10 00:14:12.704006] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518434 ] 00:32:37.956 [2024-12-10 00:14:12.780732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.956 [2024-12-10 00:14:12.821675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.214 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:38.214 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:32:38.214 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:38.214 00:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:38.214 00:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:38.214 00:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.214 00:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:38.214 00:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.214 00:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:38.214 00:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:38.782 nvme0n1 00:32:38.782 00:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:38.782 00:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.782 00:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
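The host side of the run is fully visible in the xtrace output above and condenses to the bperf.sock RPC sequence below (same bdevperf binary, rpc.py and bdevperf.py as in the log, with the workspace prefix shortened). This is a sketch of what the trace already shows, not an addition to it.

# Host-side sketch, condensed from the trace above: bdevperf with data digest
# enabled, then corrupt 256 crc32c results so reads fail digest verification.
./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
./scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
./scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
# Each corrupted digest appears below as "data digest error" followed by a READ
# completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22).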
00:32:38.782 00:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.783 00:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:38.783 00:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:38.783 Running I/O for 2 seconds... 00:32:38.783 [2024-12-10 00:14:13.662778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:38.783 [2024-12-10 00:14:13.662810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.783 [2024-12-10 00:14:13.662821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.783 [2024-12-10 00:14:13.675254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:38.783 [2024-12-10 00:14:13.675280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.783 [2024-12-10 00:14:13.675289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.783 [2024-12-10 00:14:13.683792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:38.783 [2024-12-10 00:14:13.683814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.783 [2024-12-10 00:14:13.683822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.783 [2024-12-10 00:14:13.694495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:38.783 [2024-12-10 00:14:13.694517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.783 [2024-12-10 00:14:13.694525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.783 [2024-12-10 00:14:13.704216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:38.783 [2024-12-10 00:14:13.704245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.783 [2024-12-10 00:14:13.704253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:38.783 [2024-12-10 00:14:13.714160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:38.783 [2024-12-10 00:14:13.714181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:38.783 [2024-12-10 00:14:13.714190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.723041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.723064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 00:14:13.723073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.735614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.735637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 00:14:13.735646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.747605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.747626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 00:14:13.747639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.759759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.759781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 00:14:13.759790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.770447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.770469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 00:14:13.770478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.779482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.779504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 00:14:13.779512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.789098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.789120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 00:14:13.789129] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.799615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.799636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 00:14:13.799644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.808547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.808574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 00:14:13.808582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.818117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.818138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 00:14:13.818147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.827766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.827786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 00:14:13.827794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.836805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.836830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 00:14:13.836838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.845638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.845659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 00:14:13.845667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.855595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.855616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 
00:14:13.855624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.046 [2024-12-10 00:14:13.866041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.046 [2024-12-10 00:14:13.866062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.046 [2024-12-10 00:14:13.866071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.047 [2024-12-10 00:14:13.874697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.047 [2024-12-10 00:14:13.874717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.047 [2024-12-10 00:14:13.874725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.047 [2024-12-10 00:14:13.884169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.047 [2024-12-10 00:14:13.884189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.047 [2024-12-10 00:14:13.884197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.047 [2024-12-10 00:14:13.894227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.047 [2024-12-10 00:14:13.894248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.047 [2024-12-10 00:14:13.894257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.047 [2024-12-10 00:14:13.903508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.047 [2024-12-10 00:14:13.903529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.047 [2024-12-10 00:14:13.903537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.047 [2024-12-10 00:14:13.913895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.047 [2024-12-10 00:14:13.913916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.047 [2024-12-10 00:14:13.913924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.047 [2024-12-10 00:14:13.923796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.047 [2024-12-10 00:14:13.923817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14797 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.047 [2024-12-10 00:14:13.923826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.047 [2024-12-10 00:14:13.933579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.047 [2024-12-10 00:14:13.933601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.047 [2024-12-10 00:14:13.933609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.047 [2024-12-10 00:14:13.944816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.047 [2024-12-10 00:14:13.944838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.047 [2024-12-10 00:14:13.944846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.047 [2024-12-10 00:14:13.953180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.047 [2024-12-10 00:14:13.953200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.047 [2024-12-10 00:14:13.953209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.047 [2024-12-10 00:14:13.963580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.047 [2024-12-10 00:14:13.963599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.047 [2024-12-10 00:14:13.963608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.047 [2024-12-10 00:14:13.972379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.047 [2024-12-10 00:14:13.972400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.047 [2024-12-10 00:14:13.972409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.306 [2024-12-10 00:14:13.982698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.306 [2024-12-10 00:14:13.982719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.306 [2024-12-10 00:14:13.982727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:13.992445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:13.992466] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:13.992474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.001923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.001942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.001953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.011771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.011792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.011799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.020547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.020568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.020577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.031068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.031089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.031098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.041449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.041470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.041478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.051806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.051826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.051834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.060139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.060176] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.060185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.072601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.072622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.072631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.081260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.081280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.081288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.091775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.091796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.091804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.101407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.101426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.101434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.110391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.110411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.110419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.120622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.120642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.120650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.129792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.129813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.129821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.140190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.140212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.140220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.148083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.148104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.148112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.159828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.159849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.159857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.170803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.170824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.170835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.182436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.182456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.182465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.191402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.191422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.191430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.204296] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.204316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.204324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.217024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.217044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.217052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.225210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.225231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.225239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.307 [2024-12-10 00:14:14.235249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.307 [2024-12-10 00:14:14.235269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.307 [2024-12-10 00:14:14.235278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.245800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.245820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.245829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.257312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.257332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.257340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.265645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.265668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.265676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:39.567 [2024-12-10 00:14:14.277547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.277567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.277575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.285877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.285897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.285905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.297378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.297398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.297406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.310475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.310495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.310504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.322763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.322783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.322790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.331562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.331583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.331592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.342081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.342102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.342110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.352210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.352230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.352238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.361545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.361566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.361574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.373939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.373960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.373968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.382316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.382336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.382344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.393716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.393737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.393745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.404856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.404877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.404885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.414141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.414165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.567 [2024-12-10 00:14:14.414174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.567 [2024-12-10 00:14:14.426666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.567 [2024-12-10 00:14:14.426686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.568 [2024-12-10 00:14:14.426694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.568 [2024-12-10 00:14:14.436458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.568 [2024-12-10 00:14:14.436479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.568 [2024-12-10 00:14:14.436487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.568 [2024-12-10 00:14:14.444944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.568 [2024-12-10 00:14:14.444964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.568 [2024-12-10 00:14:14.444975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.568 [2024-12-10 00:14:14.454728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.568 [2024-12-10 00:14:14.454748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.568 [2024-12-10 00:14:14.454756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.568 [2024-12-10 00:14:14.466131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.568 [2024-12-10 00:14:14.466151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.568 [2024-12-10 00:14:14.466164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.568 [2024-12-10 00:14:14.476376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.568 [2024-12-10 00:14:14.476397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.568 [2024-12-10 00:14:14.476405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.568 [2024-12-10 00:14:14.484734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.568 [2024-12-10 00:14:14.484753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:39.568 [2024-12-10 00:14:14.484761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.568 [2024-12-10 00:14:14.495388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.568 [2024-12-10 00:14:14.495408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.568 [2024-12-10 00:14:14.495416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.827 [2024-12-10 00:14:14.507193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.827 [2024-12-10 00:14:14.507213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.827 [2024-12-10 00:14:14.507221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.827 [2024-12-10 00:14:14.518234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.827 [2024-12-10 00:14:14.518254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.827 [2024-12-10 00:14:14.518262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.827 [2024-12-10 00:14:14.526427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.827 [2024-12-10 00:14:14.526447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.526455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.537953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.537973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.537981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.547879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.547900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.547907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.557375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.557395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:22160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.557404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.565620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.565640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.565648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.575917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.575937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.575945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.587594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.587614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.587622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.597911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.597931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.597940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.606260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.606280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.606288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.615831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.615851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.615862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.625027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.625047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.625055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.634379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.634399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.634408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.643643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.643662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.643671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 25125.00 IOPS, 98.14 MiB/s [2024-12-09T23:14:14.764Z] [2024-12-10 00:14:14.653558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.653579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.653587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.664084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.664105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.664113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.676408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.676429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.676437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.686426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.686446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.686454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.695813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.695833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.695841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.707139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.707168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.707178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.715539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.715560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.715569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.725367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.725387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.725395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.737719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.737740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.737748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.828 [2024-12-10 00:14:14.747250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.828 [2024-12-10 00:14:14.747270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.828 [2024-12-10 00:14:14.747278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:39.829 [2024-12-10 00:14:14.756684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:39.829 [2024-12-10 00:14:14.756705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:39.829 [2024-12-10 00:14:14.756713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.091 [2024-12-10 00:14:14.765430] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.091 [2024-12-10 00:14:14.765450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.765459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.774841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.774861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.774870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.784596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.784615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.784623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.794296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.794316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.794324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.803351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.803371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.803379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.812705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.812724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.812732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.822084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.822104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.822112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:40.092 [2024-12-10 00:14:14.831447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.831466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.831474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.840056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.840076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.840084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.850297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.850317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.850326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.859825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.859845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.859853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.868720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.868741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.868752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.879971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.879991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.880000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.889740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.889760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.889769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.900343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.900363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.900371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.908429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.908449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.908457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.919167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.919201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.919210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.932116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.092 [2024-12-10 00:14:14.932136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.092 [2024-12-10 00:14:14.932144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.092 [2024-12-10 00:14:14.944379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.093 [2024-12-10 00:14:14.944401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.093 [2024-12-10 00:14:14.944409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.093 [2024-12-10 00:14:14.956685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.093 [2024-12-10 00:14:14.956707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.093 [2024-12-10 00:14:14.956716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.093 [2024-12-10 00:14:14.964997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.093 [2024-12-10 00:14:14.965017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.093 [2024-12-10 00:14:14.965026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.093 [2024-12-10 00:14:14.975985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.093 [2024-12-10 00:14:14.976005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.093 [2024-12-10 00:14:14.976013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.093 [2024-12-10 00:14:14.984637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.093 [2024-12-10 00:14:14.984657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.093 [2024-12-10 00:14:14.984665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.093 [2024-12-10 00:14:14.995612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.093 [2024-12-10 00:14:14.995632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.093 [2024-12-10 00:14:14.995640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.093 [2024-12-10 00:14:15.004752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.093 [2024-12-10 00:14:15.004772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.093 [2024-12-10 00:14:15.004780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.093 [2024-12-10 00:14:15.013197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.093 [2024-12-10 00:14:15.013216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.093 [2024-12-10 00:14:15.013225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.093 [2024-12-10 00:14:15.023929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.093 [2024-12-10 00:14:15.023950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.093 [2024-12-10 00:14:15.023958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.357 [2024-12-10 00:14:15.035538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.357 [2024-12-10 00:14:15.035559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:40.357 [2024-12-10 00:14:15.035567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.357 [2024-12-10 00:14:15.044936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.357 [2024-12-10 00:14:15.044956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.357 [2024-12-10 00:14:15.044967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.357 [2024-12-10 00:14:15.056214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.357 [2024-12-10 00:14:15.056242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.357 [2024-12-10 00:14:15.056251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.357 [2024-12-10 00:14:15.065343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.357 [2024-12-10 00:14:15.065363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.357 [2024-12-10 00:14:15.065372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.357 [2024-12-10 00:14:15.075220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.357 [2024-12-10 00:14:15.075239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.357 [2024-12-10 00:14:15.075248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.357 [2024-12-10 00:14:15.084025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.357 [2024-12-10 00:14:15.084044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.357 [2024-12-10 00:14:15.084052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.094521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.094542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.094550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.103631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.103651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 
lba:6676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.103659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.114208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.114228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.114236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.123613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.123633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.123641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.131920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.131955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.131964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.144093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.144113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.144121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.156016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.156037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.156045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.168822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.168845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.168855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.179156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.179184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.179192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.187416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.187437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.187445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.198194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.198216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.198224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.208136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.208171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.208180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.220639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.220661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.220669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.232555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.232576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.232585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.240649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.240669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.240677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.252762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 
00:32:40.358 [2024-12-10 00:14:15.252783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.252791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.262890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.262910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.262917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.271780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.271800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.271807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.358 [2024-12-10 00:14:15.281606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.358 [2024-12-10 00:14:15.281628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.358 [2024-12-10 00:14:15.281636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.618 [2024-12-10 00:14:15.291421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.618 [2024-12-10 00:14:15.291442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.618 [2024-12-10 00:14:15.291450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.618 [2024-12-10 00:14:15.301282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.618 [2024-12-10 00:14:15.301303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.618 [2024-12-10 00:14:15.301310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.618 [2024-12-10 00:14:15.310437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.618 [2024-12-10 00:14:15.310457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.618 [2024-12-10 00:14:15.310469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.618 [2024-12-10 00:14:15.320186] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.618 [2024-12-10 00:14:15.320206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.618 [2024-12-10 00:14:15.320215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.618 [2024-12-10 00:14:15.329833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.618 [2024-12-10 00:14:15.329853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.618 [2024-12-10 00:14:15.329861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.618 [2024-12-10 00:14:15.338475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.618 [2024-12-10 00:14:15.338495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.618 [2024-12-10 00:14:15.338504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.618 [2024-12-10 00:14:15.349327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.618 [2024-12-10 00:14:15.349348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.618 [2024-12-10 00:14:15.349356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.618 [2024-12-10 00:14:15.360203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.618 [2024-12-10 00:14:15.360224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.618 [2024-12-10 00:14:15.360232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.618 [2024-12-10 00:14:15.368338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.618 [2024-12-10 00:14:15.368358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.618 [2024-12-10 00:14:15.368366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.618 [2024-12-10 00:14:15.380607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.618 [2024-12-10 00:14:15.380629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.618 [2024-12-10 00:14:15.380637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:32:40.618 [2024-12-10 00:14:15.392443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.618 [2024-12-10 00:14:15.392463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.618 [2024-12-10 00:14:15.392471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.618 [2024-12-10 00:14:15.400899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.618 [2024-12-10 00:14:15.400926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.618 [2024-12-10 00:14:15.400934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.412306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.412328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.412336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.424042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.424064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.424072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.433857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.433878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.433886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.441503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.441523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.441532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.451507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.451527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.451535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.461597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.461618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.461626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.470152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.470179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.470187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.480819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.480840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.480851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.491364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.491384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.491391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.501746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.501766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.501774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.510004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.510025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.510033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.522086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.522106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.522114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.532102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.532123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.532131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.541023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.541044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.541052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.619 [2024-12-10 00:14:15.550752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.619 [2024-12-10 00:14:15.550773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.619 [2024-12-10 00:14:15.550781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.878 [2024-12-10 00:14:15.560007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.878 [2024-12-10 00:14:15.560027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.878 [2024-12-10 00:14:15.560034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.878 [2024-12-10 00:14:15.569390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.878 [2024-12-10 00:14:15.569414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.878 [2024-12-10 00:14:15.569423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.878 [2024-12-10 00:14:15.580044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.878 [2024-12-10 00:14:15.580064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.878 [2024-12-10 00:14:15.580071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.878 [2024-12-10 00:14:15.590005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.878 [2024-12-10 00:14:15.590027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:40.878 [2024-12-10 00:14:15.590035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.878 [2024-12-10 00:14:15.598301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.878 [2024-12-10 00:14:15.598322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.878 [2024-12-10 00:14:15.598330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.879 [2024-12-10 00:14:15.610317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.879 [2024-12-10 00:14:15.610337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.879 [2024-12-10 00:14:15.610345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.879 [2024-12-10 00:14:15.621407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.879 [2024-12-10 00:14:15.621427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.879 [2024-12-10 00:14:15.621435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.879 [2024-12-10 00:14:15.629889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.879 [2024-12-10 00:14:15.629908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.879 [2024-12-10 00:14:15.629917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.879 [2024-12-10 00:14:15.642300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.879 [2024-12-10 00:14:15.642321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.879 [2024-12-10 00:14:15.642329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.879 [2024-12-10 00:14:15.652797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a0a1a0) 00:32:40.879 [2024-12-10 00:14:15.652818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:40.879 [2024-12-10 00:14:15.652826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:40.879 25239.00 IOPS, 98.59 MiB/s 00:32:40.879 Latency(us) 00:32:40.879 [2024-12-09T23:14:15.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:40.879 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:40.879 nvme0n1 
: 2.00 25262.30 98.68 0.00 0.00 5060.52 2664.18 17438.27 00:32:40.879 [2024-12-09T23:14:15.815Z] =================================================================================================================== 00:32:40.879 [2024-12-09T23:14:15.815Z] Total : 25262.30 98.68 0.00 0.00 5060.52 2664.18 17438.27 00:32:40.879 { 00:32:40.879 "results": [ 00:32:40.879 { 00:32:40.879 "job": "nvme0n1", 00:32:40.879 "core_mask": "0x2", 00:32:40.879 "workload": "randread", 00:32:40.879 "status": "finished", 00:32:40.879 "queue_depth": 128, 00:32:40.879 "io_size": 4096, 00:32:40.879 "runtime": 2.004964, 00:32:40.879 "iops": 25262.298973946665, 00:32:40.879 "mibps": 98.68085536697916, 00:32:40.879 "io_failed": 0, 00:32:40.879 "io_timeout": 0, 00:32:40.879 "avg_latency_us": 5060.515721773467, 00:32:40.879 "min_latency_us": 2664.1808695652176, 00:32:40.879 "max_latency_us": 17438.274782608696 00:32:40.879 } 00:32:40.879 ], 00:32:40.879 "core_count": 1 00:32:40.879 } 00:32:40.879 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:40.879 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:40.879 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:40.879 | .driver_specific 00:32:40.879 | .nvme_error 00:32:40.879 | .status_code 00:32:40.879 | .command_transient_transport_error' 00:32:40.879 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:41.138 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 198 > 0 )) 00:32:41.138 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 518434 00:32:41.138 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 518434 ']' 00:32:41.138 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 518434 00:32:41.138 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:32:41.138 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:41.138 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 518434 00:32:41.138 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:41.138 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:41.138 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 518434' 00:32:41.138 killing process with pid 518434 00:32:41.138 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 518434 00:32:41.138 Received shutdown signal, test time was about 2.000000 seconds 00:32:41.138 00:32:41.138 Latency(us) 00:32:41.138 [2024-12-09T23:14:16.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.138 [2024-12-09T23:14:16.074Z] =================================================================================================================== 00:32:41.138 
[2024-12-09T23:14:16.074Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:41.138 00:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 518434 00:32:41.398 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:32:41.398 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:41.398 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:41.398 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:32:41.398 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:32:41.398 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=518946 00:32:41.398 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 518946 /var/tmp/bperf.sock 00:32:41.398 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:32:41.398 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 518946 ']' 00:32:41.398 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:41.398 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:41.398 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:41.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:41.398 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:41.398 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:41.398 [2024-12-10 00:14:16.140331] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:32:41.398 [2024-12-10 00:14:16.140380] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518946 ] 00:32:41.398 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:41.398 Zero copy mechanism will not be used. 
00:32:41.398 [2024-12-10 00:14:16.215865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.398 [2024-12-10 00:14:16.252719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.657 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:41.657 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:32:41.657 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:41.657 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:41.657 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:41.657 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.657 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:41.657 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.657 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:41.657 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:41.916 nvme0n1 00:32:42.175 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:42.175 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.175 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:42.175 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.175 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:42.175 00:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:42.175 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:42.175 Zero copy mechanism will not be used. 00:32:42.175 Running I/O for 2 seconds... 
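The trace above is the entire setup for this error-injection pass, and it accounts for the wall of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" records that follows: the accel crc32c operation is deliberately corrupted, so the TCP data digest check fails on the read path and each completion is reported as a transient transport error, while --bdev-retry-count -1 lets the I/O be retried rather than failed (the previous run's JSON shows io_failed: 0 even though the error counter read 198). Below is a minimal, non-authoritative sketch of the same sequence using only the RPC calls visible in the trace; the split between the target's default RPC socket and the bdevperf socket (for rpc_cmd versus bperf_rpc), and the TGT_RPC/BPERF_RPC helper names, are assumptions for illustration.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk      # same checkout path as in this log
BPERF_SOCK=/var/tmp/bperf.sock
TGT_RPC()   { "$SPDK_DIR/scripts/rpc.py" "$@"; }                  # assumed: main app's default RPC socket
BPERF_RPC() { "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; } # bdevperf instance started with -r $BPERF_SOCK

# Keep per-controller NVMe error statistics and retry failed I/O indefinitely.
BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any stale injection, then attach the target with TCP data digest enabled (--ddgst).
TGT_RPC accel_error_inject_error -o crc32c -t disable
BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c corruption; the -o/-t/-i arguments are copied verbatim from the trace above.
TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the configured workload (here: randread, 128 KiB I/O, queue depth 16, 2 seconds).
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

# Afterwards the test reads back the counter it asserts on, e.g. the "(( 198 > 0 ))" check seen
# earlier in this log for the previous run.
BPERF_RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error'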
00:32:42.175 [2024-12-10 00:14:16.970648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.175 [2024-12-10 00:14:16.970683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.175 [2024-12-10 00:14:16.970694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.175 [2024-12-10 00:14:16.975909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.175 [2024-12-10 00:14:16.975933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.175 [2024-12-10 00:14:16.975942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.175 [2024-12-10 00:14:16.981129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.175 [2024-12-10 00:14:16.981153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.175 [2024-12-10 00:14:16.981168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.175 [2024-12-10 00:14:16.986352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.175 [2024-12-10 00:14:16.986374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.175 [2024-12-10 00:14:16.986383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.175 [2024-12-10 00:14:16.991621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.175 [2024-12-10 00:14:16.991643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.175 [2024-12-10 00:14:16.991652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.175 [2024-12-10 00:14:16.996884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.175 [2024-12-10 00:14:16.996906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.175 [2024-12-10 00:14:16.996914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.175 [2024-12-10 00:14:17.002119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.175 [2024-12-10 00:14:17.002140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.175 [2024-12-10 00:14:17.002148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.175 [2024-12-10 00:14:17.007456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.175 [2024-12-10 00:14:17.007479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.175 [2024-12-10 00:14:17.007491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.175 [2024-12-10 00:14:17.011138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.175 [2024-12-10 00:14:17.011165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.175 [2024-12-10 00:14:17.011175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.175 [2024-12-10 00:14:17.015499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.175 [2024-12-10 00:14:17.015520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.175 [2024-12-10 00:14:17.015530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.175 [2024-12-10 00:14:17.020786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.175 [2024-12-10 00:14:17.020808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.020816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.025999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.026020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.026028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.031328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.031350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.031358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.036573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.036596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.036604] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.041698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.041719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.041727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.046971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.046993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.047001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.052408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.052429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.052437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.057981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.058003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.058011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.063357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.063378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.063386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.068911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.068932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.068940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.074383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.074404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 
00:14:17.074412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.079958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.079979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.079986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.085519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.085543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.085552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.090914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.090935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.090944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.096389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.096410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.096422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.101872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.101894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.101902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.176 [2024-12-10 00:14:17.107396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.176 [2024-12-10 00:14:17.107418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.176 [2024-12-10 00:14:17.107427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.112843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.112866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.112874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.118174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.118195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.118203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.123392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.123414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.123422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.128640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.128661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.128669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.133873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.133894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.133902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.139433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.139455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.139463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.145353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.145378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.145387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.151222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.151244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.151252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.156935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.156957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.156965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.162615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.162637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.162645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.168292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.168314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.168322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.173702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.173724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.173732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.179102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.179128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.179136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.185462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.185484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.185492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.193492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.193515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.193524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.200849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.200872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.200880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.208038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.208061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.208070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.214320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.214343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.214351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.219765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.437 [2024-12-10 00:14:17.219787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.437 [2024-12-10 00:14:17.219795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.437 [2024-12-10 00:14:17.225185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.225207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.225215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.231000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.231023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.231032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.236426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 
00:32:42.438 [2024-12-10 00:14:17.236448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.236456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.242066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.242089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.242096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.247926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.247946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.247958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.253565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.253586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.253594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.259123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.259145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.259153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.264605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.264627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.264635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.269586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.269608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.269616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.274995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.275016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.275024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.280309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.280331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.280339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.285826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.285848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.285856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.291256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.291277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.291286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.296740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.296765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.296773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.300444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.300464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.300473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.304709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.304730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.304738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.310103] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.310124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.310132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.315651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.315673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.315681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.321035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.321056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.321064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.326564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.326585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.326593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.332050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.332071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.332079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.337243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.337265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.337273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.342405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.342427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.342435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:32:42.438 [2024-12-10 00:14:17.347827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.347848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.347856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.353615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.353637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.353645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.359415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.359437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.359447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.438 [2024-12-10 00:14:17.364919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.438 [2024-12-10 00:14:17.364941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.438 [2024-12-10 00:14:17.364950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.698 [2024-12-10 00:14:17.370759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.698 [2024-12-10 00:14:17.370780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.698 [2024-12-10 00:14:17.370789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.698 [2024-12-10 00:14:17.376373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.698 [2024-12-10 00:14:17.376395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.376403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.381895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.381916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.381924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.387691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.387712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.387724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.393103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.393124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.393132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.398528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.398550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.398558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.404338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.404361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.404369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.409717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.409739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.409747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.415258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.415280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.415288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.420924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.420944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.420952] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.426600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.426622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.426630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.432430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.432452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.432460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.438054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.438077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.438085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.443683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.443704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.443711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.449183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.449203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.449211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.454477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.454498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.454507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.458895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.458916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.458924] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.464082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.464103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.464112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.469376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.469395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.469403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.474660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.474681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.474689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.479923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.479944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.479956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.485224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.485245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.485253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.490604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.490625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.490633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.495884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.495905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:42.699 [2024-12-10 00:14:17.495913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.501342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.501363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.501371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.506763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.506784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.506792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.512243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.512264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.512273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.516763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.516785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.516793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.520138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.699 [2024-12-10 00:14:17.520165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.699 [2024-12-10 00:14:17.520174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.699 [2024-12-10 00:14:17.525384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.525409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.525417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.530443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.530464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.530472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.535479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.535500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.535508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.540778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.540797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.540805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.546086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.546106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.546114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.551348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.551368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.551376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.556806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.556828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.556836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.562274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.562295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.562303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.567644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.567664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.567671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.572921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.572943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.572951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.578371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.578391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.578399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.583804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.583825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.583834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.589500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.589521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.589529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.594933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.594953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.594961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.600001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.600022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.600030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.605650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 
00:32:42.700 [2024-12-10 00:14:17.605670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.605678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.611061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.611082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.611091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.616456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.616476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.616488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.621602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.621623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.621631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.700 [2024-12-10 00:14:17.626781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.700 [2024-12-10 00:14:17.626802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.700 [2024-12-10 00:14:17.626809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.960 [2024-12-10 00:14:17.632061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.632083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.632091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.637467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.637488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.637496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.643105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.643125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.643134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.648343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.648364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.648372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.653487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.653509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.653517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.659574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.659594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.659602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.665021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.665046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.665054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.670465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.670486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.670494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.675869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.675889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.675897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.681263] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.681284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.681292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.686696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.686718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.686726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.692029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.692050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.692058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.697428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.697449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.697457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.702992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.703013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.703021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.708496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.708517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.708529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.714069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.714090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.714098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:32:42.961 [2024-12-10 00:14:17.719447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.719468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.719476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.725117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.725138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.725146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.730527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.730548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.730555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.736052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.736073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.736082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.741444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.741465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.741473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.746849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.746871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.746879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.752385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.752406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.752414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.757970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.757996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.758004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.763527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.763548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.763556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.768986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.769007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.769015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.774167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.774188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.961 [2024-12-10 00:14:17.774196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.961 [2024-12-10 00:14:17.779615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.961 [2024-12-10 00:14:17.779637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.779645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.785022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.785043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.785051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.790587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.790607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.790616] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.795879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.795900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.795909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.801221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.801242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.801250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.806680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.806701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.806709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.812146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.812173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.812180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.816676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.816697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.816705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.820101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.820121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.820130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.825456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.825477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.825485] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.830939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.830959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.830967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.836287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.836308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.836316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.841681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.841702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.841710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.847119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.847140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.847152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.852307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.852327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.852336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.857768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.857789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.857798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.863069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.863090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:42.962 [2024-12-10 00:14:17.863098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.868487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.868507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.868516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.873861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.873882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.873891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.879356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.879376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.879383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.885164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.885184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.885192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:42.962 [2024-12-10 00:14:17.890578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:42.962 [2024-12-10 00:14:17.890600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:42.962 [2024-12-10 00:14:17.890607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.896015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.896042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.896049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.901364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.901386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7520 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.901394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.906881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.906903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.906911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.912220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.912240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.912248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.918248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.918270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.918279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.924352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.924374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.924383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.929799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.929820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.929829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.934612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.934631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.934639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.939806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.939827] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.939835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.945234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.945256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.945265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.950531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.950553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.950561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.955887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.955908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.955917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.961279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.961301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.961309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.966590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.966611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.966619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.223 5717.00 IOPS, 714.62 MiB/s [2024-12-09T23:14:18.159Z] [2024-12-10 00:14:17.973029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.973051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.973059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.978268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.978289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.978298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.983636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.983656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.983664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.988986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.989007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.989019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.994479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.994500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.994508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:17.999956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:17.999977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:17.999984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:18.005608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:18.005630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.223 [2024-12-10 00:14:18.005638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.223 [2024-12-10 00:14:18.010996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.223 [2024-12-10 00:14:18.011020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.011028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.016512] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.016533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.016541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.021942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.021965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.021973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.027369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.027391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.027399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.032752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.032773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.032782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.038190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.038211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.038219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.043616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.043639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.043646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.049116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.049137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.049145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:32:43.224 [2024-12-10 00:14:18.054600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.054622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.054630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.060043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.060064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.060073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.065387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.065409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.065418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.070611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.070634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.070641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.075834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.075856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.075864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.081003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.081025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.081037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.086211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.086234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.086241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.091375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.091398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.091407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.096574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.096595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.096603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.101754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.101775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.101783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.106969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.106990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.106998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.112213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.112234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.112242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.117420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.117442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.117450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.122643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.122664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.122672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.127831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.127856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.127864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.132985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.133006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.133015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.138206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.138227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.138235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.143354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.224 [2024-12-10 00:14:18.143376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.224 [2024-12-10 00:14:18.143384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.224 [2024-12-10 00:14:18.148577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.225 [2024-12-10 00:14:18.148599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.225 [2024-12-10 00:14:18.148607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.225 [2024-12-10 00:14:18.153926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.225 [2024-12-10 00:14:18.153948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.225 [2024-12-10 00:14:18.153956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.159265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.159288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 
[2024-12-10 00:14:18.159296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.164444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.164467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.164475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.169655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.169677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.169685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.174890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.174911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.174920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.180097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.180118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.180126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.185278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.185300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.185308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.190451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.190473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.190481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.195649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.195670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5056 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.195678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.200873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.200894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.200902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.206047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.206068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.206076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.211186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.211207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.211215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.216345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.216367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.216378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.221573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.221595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.221603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.226834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.226855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.226863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.232089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.232111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.232119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.237369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.237391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.237399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.242637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.242659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.242667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.247915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.247938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.247947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.253154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.253182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.253191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.258433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.258454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.258462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.263705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.263730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.263738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.268947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.268968] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.268977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.274257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.274281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.274289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.279620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.279641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.487 [2024-12-10 00:14:18.279650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.487 [2024-12-10 00:14:18.284894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.487 [2024-12-10 00:14:18.284915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.284923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.290168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.290190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.290198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.295431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.295452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.295461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.300684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.300706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.300714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.305977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.305997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.306005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.311228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.311249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.311257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.316492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.316514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.316522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.321761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.321781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.321791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.326978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.326999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.327007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.332257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.332278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.332286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.337511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.337532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.337540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.342749] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.342771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.342779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.348009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.348030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.348038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.353234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.353258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.353267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.358467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.358488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.358496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.363684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.363705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.363713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.368940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.368961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.368970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.374192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.374213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.374220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:32:43.488 [2024-12-10 00:14:18.379439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.379461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.379468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.384708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.384730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.384738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.389961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.389982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.389990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.488 [2024-12-10 00:14:18.395173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.488 [2024-12-10 00:14:18.395194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.488 [2024-12-10 00:14:18.395201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.489 [2024-12-10 00:14:18.400374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.489 [2024-12-10 00:14:18.400396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.489 [2024-12-10 00:14:18.400404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.489 [2024-12-10 00:14:18.405580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.489 [2024-12-10 00:14:18.405603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.489 [2024-12-10 00:14:18.405611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.489 [2024-12-10 00:14:18.410782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.489 [2024-12-10 00:14:18.410803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.489 [2024-12-10 00:14:18.410812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.489 [2024-12-10 00:14:18.416002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.489 [2024-12-10 00:14:18.416023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.489 [2024-12-10 00:14:18.416032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.749 [2024-12-10 00:14:18.421320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.749 [2024-12-10 00:14:18.421342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.749 [2024-12-10 00:14:18.421350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.749 [2024-12-10 00:14:18.426619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.749 [2024-12-10 00:14:18.426641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.749 [2024-12-10 00:14:18.426649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.749 [2024-12-10 00:14:18.431797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.749 [2024-12-10 00:14:18.431818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.749 [2024-12-10 00:14:18.431826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.749 [2024-12-10 00:14:18.436949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.749 [2024-12-10 00:14:18.436969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.749 [2024-12-10 00:14:18.436977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.749 [2024-12-10 00:14:18.442112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.749 [2024-12-10 00:14:18.442133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.749 [2024-12-10 00:14:18.442145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.749 [2024-12-10 00:14:18.447274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.749 [2024-12-10 00:14:18.447296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.749 [2024-12-10 00:14:18.447305] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.749 [2024-12-10 00:14:18.452504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.749 [2024-12-10 00:14:18.452526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.749 [2024-12-10 00:14:18.452534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.749 [2024-12-10 00:14:18.457722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.457744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.457751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.462922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.462943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.462951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.468153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.468182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.468190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.473378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.473399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.473407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.478575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.478596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.478605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.483812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.483833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.483841] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.489057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.489082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.489090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.494343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.494364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.494372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.499617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.499639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.499647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.504869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.504890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.504898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.510070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.510091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.510099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.515263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.515283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.515291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.520479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.520500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:43.750 [2024-12-10 00:14:18.520508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.525670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.525691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.525699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.530867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.530888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.530897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.537066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.537088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.537096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.542754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.542776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.542784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.547907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.547928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.547937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.553056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.553077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.553085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.558245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.558266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.558274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.563428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.563449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.563458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.568593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.568614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.568623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.573794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.573815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.573823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.580383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.580404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.580419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.585690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.585711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.585719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.590852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.590874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.590882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.596070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.596090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.596098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.750 [2024-12-10 00:14:18.601224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.750 [2024-12-10 00:14:18.601245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.750 [2024-12-10 00:14:18.601253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.606333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.751 [2024-12-10 00:14:18.606353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.606361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.611553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.751 [2024-12-10 00:14:18.611574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.611582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.616779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.751 [2024-12-10 00:14:18.616799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.616808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.622004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.751 [2024-12-10 00:14:18.622025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.622033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.627228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.751 [2024-12-10 00:14:18.627248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.627256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.632429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 
00:32:43.751 [2024-12-10 00:14:18.632450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.632458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.637668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.751 [2024-12-10 00:14:18.637689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.637697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.642890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.751 [2024-12-10 00:14:18.642910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.642918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.648119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.751 [2024-12-10 00:14:18.648140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.648148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.653327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.751 [2024-12-10 00:14:18.653348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.653357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.658571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.751 [2024-12-10 00:14:18.658592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.658600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.663762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.751 [2024-12-10 00:14:18.663783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.663791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.668968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.751 [2024-12-10 00:14:18.668989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.669001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.674258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.751 [2024-12-10 00:14:18.674279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.674287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:43.751 [2024-12-10 00:14:18.679543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:43.751 [2024-12-10 00:14:18.679564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:43.751 [2024-12-10 00:14:18.679572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.014 [2024-12-10 00:14:18.684877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.014 [2024-12-10 00:14:18.684898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.015 [2024-12-10 00:14:18.684906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.015 [2024-12-10 00:14:18.690175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.015 [2024-12-10 00:14:18.690197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.015 [2024-12-10 00:14:18.690205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.015 [2024-12-10 00:14:18.695407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.015 [2024-12-10 00:14:18.695428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.015 [2024-12-10 00:14:18.695436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.015 [2024-12-10 00:14:18.700596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.015 [2024-12-10 00:14:18.700618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.015 [2024-12-10 00:14:18.700626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.015 [2024-12-10 00:14:18.705791] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.015 [2024-12-10 00:14:18.705812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.015 [2024-12-10 00:14:18.705820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.015 [2024-12-10 00:14:18.710986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.015 [2024-12-10 00:14:18.711007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.015 [2024-12-10 00:14:18.711014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.015 [2024-12-10 00:14:18.716154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.015 [2024-12-10 00:14:18.716186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.015 [2024-12-10 00:14:18.716194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.015 [2024-12-10 00:14:18.721341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.015 [2024-12-10 00:14:18.721361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.015 [2024-12-10 00:14:18.721369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.015 [2024-12-10 00:14:18.726531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.015 [2024-12-10 00:14:18.726552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.015 [2024-12-10 00:14:18.726560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.015 [2024-12-10 00:14:18.731707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.015 [2024-12-10 00:14:18.731728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.015 [2024-12-10 00:14:18.731736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.015 [2024-12-10 00:14:18.736915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.015 [2024-12-10 00:14:18.736936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.015 [2024-12-10 00:14:18.736944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:32:44.015 [2024-12-10 00:14:18.742071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.015 [2024-12-10 00:14:18.742092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.015 [2024-12-10 00:14:18.742100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.015 [2024-12-10 00:14:18.747306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.016 [2024-12-10 00:14:18.747328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.016 [2024-12-10 00:14:18.747336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.016 [2024-12-10 00:14:18.752599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.016 [2024-12-10 00:14:18.752620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.016 [2024-12-10 00:14:18.752628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.016 [2024-12-10 00:14:18.757805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.016 [2024-12-10 00:14:18.757826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.016 [2024-12-10 00:14:18.757834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.016 [2024-12-10 00:14:18.763017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.016 [2024-12-10 00:14:18.763038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.016 [2024-12-10 00:14:18.763046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.016 [2024-12-10 00:14:18.768225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.016 [2024-12-10 00:14:18.768245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.016 [2024-12-10 00:14:18.768254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.016 [2024-12-10 00:14:18.773451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.016 [2024-12-10 00:14:18.773472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.016 [2024-12-10 00:14:18.773480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.016 [2024-12-10 00:14:18.778660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.016 [2024-12-10 00:14:18.778681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.016 [2024-12-10 00:14:18.778688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.016 [2024-12-10 00:14:18.783811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.016 [2024-12-10 00:14:18.783832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.016 [2024-12-10 00:14:18.783839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.016 [2024-12-10 00:14:18.789017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.016 [2024-12-10 00:14:18.789038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.016 [2024-12-10 00:14:18.789046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.016 [2024-12-10 00:14:18.794215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.016 [2024-12-10 00:14:18.794236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.016 [2024-12-10 00:14:18.794245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.016 [2024-12-10 00:14:18.799420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.016 [2024-12-10 00:14:18.799441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.016 [2024-12-10 00:14:18.799449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.016 [2024-12-10 00:14:18.804643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.016 [2024-12-10 00:14:18.804665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.016 [2024-12-10 00:14:18.804676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.017 [2024-12-10 00:14:18.809813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.017 [2024-12-10 00:14:18.809834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.017 [2024-12-10 00:14:18.809842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.017 [2024-12-10 00:14:18.815003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.017 [2024-12-10 00:14:18.815024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.017 [2024-12-10 00:14:18.815032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.017 [2024-12-10 00:14:18.820242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.017 [2024-12-10 00:14:18.820263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.017 [2024-12-10 00:14:18.820271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.017 [2024-12-10 00:14:18.825459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.017 [2024-12-10 00:14:18.825480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.017 [2024-12-10 00:14:18.825488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.017 [2024-12-10 00:14:18.830649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.017 [2024-12-10 00:14:18.830671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.017 [2024-12-10 00:14:18.830679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.017 [2024-12-10 00:14:18.835780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.017 [2024-12-10 00:14:18.835801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.017 [2024-12-10 00:14:18.835808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.017 [2024-12-10 00:14:18.840953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.017 [2024-12-10 00:14:18.840974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.017 [2024-12-10 00:14:18.840981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.017 [2024-12-10 00:14:18.846199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.017 [2024-12-10 00:14:18.846220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:44.017 [2024-12-10 00:14:18.846228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.017 [2024-12-10 00:14:18.851432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.017 [2024-12-10 00:14:18.851457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.017 [2024-12-10 00:14:18.851465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.017 [2024-12-10 00:14:18.856680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.017 [2024-12-10 00:14:18.856701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.017 [2024-12-10 00:14:18.856709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.017 [2024-12-10 00:14:18.861914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.017 [2024-12-10 00:14:18.861935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.017 [2024-12-10 00:14:18.861943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.017 [2024-12-10 00:14:18.867090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.017 [2024-12-10 00:14:18.867111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.018 [2024-12-10 00:14:18.867119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.018 [2024-12-10 00:14:18.872166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.018 [2024-12-10 00:14:18.872187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.018 [2024-12-10 00:14:18.872195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.018 [2024-12-10 00:14:18.877354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.018 [2024-12-10 00:14:18.877375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.018 [2024-12-10 00:14:18.877382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.018 [2024-12-10 00:14:18.882531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.018 [2024-12-10 00:14:18.882551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24768 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.018 [2024-12-10 00:14:18.882559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.018 [2024-12-10 00:14:18.887732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.018 [2024-12-10 00:14:18.887753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.018 [2024-12-10 00:14:18.887761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.018 [2024-12-10 00:14:18.892946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.018 [2024-12-10 00:14:18.892968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.018 [2024-12-10 00:14:18.892976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.018 [2024-12-10 00:14:18.898167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.018 [2024-12-10 00:14:18.898188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.018 [2024-12-10 00:14:18.898195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.018 [2024-12-10 00:14:18.903311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.018 [2024-12-10 00:14:18.903332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.018 [2024-12-10 00:14:18.903340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.018 [2024-12-10 00:14:18.908487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.018 [2024-12-10 00:14:18.908509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.018 [2024-12-10 00:14:18.908517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.018 [2024-12-10 00:14:18.913653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.018 [2024-12-10 00:14:18.913674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.018 [2024-12-10 00:14:18.913682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.018 [2024-12-10 00:14:18.918859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.018 [2024-12-10 00:14:18.918880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.018 [2024-12-10 00:14:18.918888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.018 [2024-12-10 00:14:18.924212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.018 [2024-12-10 00:14:18.924233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.018 [2024-12-10 00:14:18.924242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.019 [2024-12-10 00:14:18.929370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.019 [2024-12-10 00:14:18.929391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.019 [2024-12-10 00:14:18.929399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.019 [2024-12-10 00:14:18.934603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.019 [2024-12-10 00:14:18.934623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.019 [2024-12-10 00:14:18.934632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.019 [2024-12-10 00:14:18.939815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.019 [2024-12-10 00:14:18.939837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.019 [2024-12-10 00:14:18.939849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.019 [2024-12-10 00:14:18.945089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.019 [2024-12-10 00:14:18.945111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.019 [2024-12-10 00:14:18.945119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.279 [2024-12-10 00:14:18.950323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.279 [2024-12-10 00:14:18.950345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.279 [2024-12-10 00:14:18.950353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.279 [2024-12-10 00:14:18.955504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.279 
[2024-12-10 00:14:18.955526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.279 [2024-12-10 00:14:18.955534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:44.279 [2024-12-10 00:14:18.960687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.279 [2024-12-10 00:14:18.960708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.279 [2024-12-10 00:14:18.960716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.279 [2024-12-10 00:14:18.965799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.279 [2024-12-10 00:14:18.965820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.279 [2024-12-10 00:14:18.965828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:44.279 5811.50 IOPS, 726.44 MiB/s [2024-12-09T23:14:19.215Z] [2024-12-10 00:14:18.972378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bf9c90) 00:32:44.279 [2024-12-10 00:14:18.972399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:44.279 [2024-12-10 00:14:18.972408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:44.279 00:32:44.279 Latency(us) 00:32:44.279 [2024-12-09T23:14:19.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.279 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:44.279 nvme0n1 : 2.00 5810.48 726.31 0.00 0.00 2750.64 701.66 8776.13 00:32:44.279 [2024-12-09T23:14:19.215Z] =================================================================================================================== 00:32:44.279 [2024-12-09T23:14:19.215Z] Total : 5810.48 726.31 0.00 0.00 2750.64 701.66 8776.13 00:32:44.279 { 00:32:44.279 "results": [ 00:32:44.279 { 00:32:44.279 "job": "nvme0n1", 00:32:44.279 "core_mask": "0x2", 00:32:44.279 "workload": "randread", 00:32:44.279 "status": "finished", 00:32:44.279 "queue_depth": 16, 00:32:44.279 "io_size": 131072, 00:32:44.279 "runtime": 2.003106, 00:32:44.279 "iops": 5810.476330259107, 00:32:44.279 "mibps": 726.3095412823884, 00:32:44.279 "io_failed": 0, 00:32:44.279 "io_timeout": 0, 00:32:44.279 "avg_latency_us": 2750.635591732444, 00:32:44.279 "min_latency_us": 701.6626086956521, 00:32:44.279 "max_latency_us": 8776.125217391304 00:32:44.279 } 00:32:44.279 ], 00:32:44.279 "core_count": 1 00:32:44.279 } 00:32:44.279 00:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:44.279 00:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:44.279 00:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 
00:32:44.279 | .driver_specific 00:32:44.279 | .nvme_error 00:32:44.279 | .status_code 00:32:44.279 | .command_transient_transport_error' 00:32:44.279 00:14:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:44.279 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 376 > 0 )) 00:32:44.279 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 518946 00:32:44.279 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 518946 ']' 00:32:44.279 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 518946 00:32:44.279 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:32:44.279 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:44.279 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 518946 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 518946' 00:32:44.538 killing process with pid 518946 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 518946 00:32:44.538 Received shutdown signal, test time was about 2.000000 seconds 00:32:44.538 00:32:44.538 Latency(us) 00:32:44.538 [2024-12-09T23:14:19.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.538 [2024-12-09T23:14:19.474Z] =================================================================================================================== 00:32:44.538 [2024-12-09T23:14:19.474Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 518946 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=519423 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 519423 /var/tmp/bperf.sock 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 519423 
']' 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:44.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:44.538 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:44.538 [2024-12-10 00:14:19.438438] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:32:44.539 [2024-12-10 00:14:19.438486] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519423 ] 00:32:44.798 [2024-12-10 00:14:19.515645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.798 [2024-12-10 00:14:19.553088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.798 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:44.798 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:32:44.798 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:44.798 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:45.057 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:45.057 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.057 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:45.057 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.057 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:45.057 00:14:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:45.625 nvme0n1 00:32:45.625 00:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:45.625 00:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.625 00:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
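The get_transient_errcount step above asks the running bdevperf for its accumulated NVMe error counters over the bperf RPC socket (`bdev_get_iostat -b nvme0n1`, collected because the setup passes `--nvme-error-stat`) and uses jq to pull out `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error`; the `(( 376 > 0 ))` check above only requires that at least one transient transport error was recorded during the randread phase. The sketch below mirrors that jq filter in Python on a hand-written sample of the iostat JSON; the path follows the jq expression shown in the log, but the surrounding fields and the counter value are illustrative, not taken from this run.

    # Illustrative only: extract the transient-transport-error counter the same
    # way digest.sh's jq filter does. `sample_iostat` is a made-up stand-in for
    # the JSON returned by `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1`.
    sample_iostat = {
        "bdevs": [
            {
                "name": "nvme0n1",        # assumed field, included for readability
                "driver_specific": {
                    "nvme_error": {
                        "status_code": {
                            "command_transient_transport_error": 376,
                        }
                    }
                },
            }
        ]
    }

    errcount = (
        sample_iostat["bdevs"][0]
        ["driver_specific"]["nvme_error"]
        ["status_code"]["command_transient_transport_error"]
    )
    assert errcount > 0, "digest error test expects transient transport errors"

As a sanity check on the results table further up, the reported numbers are internally consistent: 5810.48 IOPS at the 128 KiB I/O size used for the randread phase is 5810.48 x 0.125 MiB, roughly 726.31 MiB/s, the throughput shown. The xtrace lines that follow repeat the setup for a randwrite workload (4 KiB I/Os, queue depth 128): a new bdevperf is launched on /var/tmp/bperf.sock, crc32c error injection is re-armed with `accel_error_inject_error -o crc32c -t corrupt -i 256`, and a controller is attached with `--ddgst`, which is why the log then fills with data digest errors on the WRITE path (tcp.c:2241 data_crc32_calc_done).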
00:32:45.625 00:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.625 00:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:45.625 00:14:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:45.625 Running I/O for 2 seconds... 00:32:45.625 [2024-12-10 00:14:20.414645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.625 [2024-12-10 00:14:20.414801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.625 [2024-12-10 00:14:20.414831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.625 [2024-12-10 00:14:20.424389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.625 [2024-12-10 00:14:20.424540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.625 [2024-12-10 00:14:20.424565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.625 [2024-12-10 00:14:20.434093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.625 [2024-12-10 00:14:20.434242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.625 [2024-12-10 00:14:20.434262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.625 [2024-12-10 00:14:20.443799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.625 [2024-12-10 00:14:20.443939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.625 [2024-12-10 00:14:20.443957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.625 [2024-12-10 00:14:20.453499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.625 [2024-12-10 00:14:20.453640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.625 [2024-12-10 00:14:20.453659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.625 [2024-12-10 00:14:20.463147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.625 [2024-12-10 00:14:20.463294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.625 [2024-12-10 00:14:20.463314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 
cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.625 [2024-12-10 00:14:20.472784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.625 [2024-12-10 00:14:20.472924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.625 [2024-12-10 00:14:20.472943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.625 [2024-12-10 00:14:20.482383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.626 [2024-12-10 00:14:20.482522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.626 [2024-12-10 00:14:20.482541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.626 [2024-12-10 00:14:20.491942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.626 [2024-12-10 00:14:20.492080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.626 [2024-12-10 00:14:20.492098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.626 [2024-12-10 00:14:20.501566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.626 [2024-12-10 00:14:20.501706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.626 [2024-12-10 00:14:20.501724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.626 [2024-12-10 00:14:20.511171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.626 [2024-12-10 00:14:20.511311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.626 [2024-12-10 00:14:20.511330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.626 [2024-12-10 00:14:20.520793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.626 [2024-12-10 00:14:20.520933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.626 [2024-12-10 00:14:20.520952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.626 [2024-12-10 00:14:20.530398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.626 [2024-12-10 00:14:20.530536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.626 [2024-12-10 00:14:20.530554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.626 [2024-12-10 00:14:20.540004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.626 [2024-12-10 00:14:20.540144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.626 [2024-12-10 00:14:20.540165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.626 [2024-12-10 00:14:20.549599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.626 [2024-12-10 00:14:20.549740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.626 [2024-12-10 00:14:20.549759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.886 [2024-12-10 00:14:20.559422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.886 [2024-12-10 00:14:20.559565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.886 [2024-12-10 00:14:20.559584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.886 [2024-12-10 00:14:20.569168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.886 [2024-12-10 00:14:20.569309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.886 [2024-12-10 00:14:20.569329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.886 [2024-12-10 00:14:20.578783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.886 [2024-12-10 00:14:20.578922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.886 [2024-12-10 00:14:20.578941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.886 [2024-12-10 00:14:20.588407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.886 [2024-12-10 00:14:20.588546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.886 [2024-12-10 00:14:20.588568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.886 [2024-12-10 00:14:20.598030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.886 [2024-12-10 00:14:20.598172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.886 [2024-12-10 00:14:20.598191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.886 [2024-12-10 00:14:20.607600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.886 [2024-12-10 00:14:20.607739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.886 [2024-12-10 00:14:20.607757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.886 [2024-12-10 00:14:20.617231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.886 [2024-12-10 00:14:20.617371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.886 [2024-12-10 00:14:20.617390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.886 [2024-12-10 00:14:20.626829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.886 [2024-12-10 00:14:20.626967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.886 [2024-12-10 00:14:20.626986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.886 [2024-12-10 00:14:20.636416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.886 [2024-12-10 00:14:20.636553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.886 [2024-12-10 00:14:20.636572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.886 [2024-12-10 00:14:20.646005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.886 [2024-12-10 00:14:20.646143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.886 [2024-12-10 00:14:20.646170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.886 [2024-12-10 00:14:20.655624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.886 [2024-12-10 00:14:20.655761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.886 [2024-12-10 00:14:20.655781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.886 [2024-12-10 00:14:20.665300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.886 [2024-12-10 00:14:20.665445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.886 
[2024-12-10 00:14:20.665464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.886 [2024-12-10 00:14:20.675086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.886 [2024-12-10 00:14:20.675243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.675261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.684700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.887 [2024-12-10 00:14:20.684838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.684858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.694285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.887 [2024-12-10 00:14:20.694425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.694444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.703832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.887 [2024-12-10 00:14:20.703969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.703987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.713410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.887 [2024-12-10 00:14:20.713550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.713569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.723010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.887 [2024-12-10 00:14:20.723149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.723172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.732872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.887 [2024-12-10 00:14:20.733011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25193 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.733030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.742463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.887 [2024-12-10 00:14:20.742602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.742621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.752092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.887 [2024-12-10 00:14:20.752241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.752260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.761687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.887 [2024-12-10 00:14:20.761826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.761845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.771294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.887 [2024-12-10 00:14:20.771435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.771453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.780912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.887 [2024-12-10 00:14:20.781051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.781070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.790503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.887 [2024-12-10 00:14:20.790642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.790661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.800087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.887 [2024-12-10 00:14:20.800233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 
nsid:1 lba:21412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.800252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.809681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:45.887 [2024-12-10 00:14:20.809823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.887 [2024-12-10 00:14:20.809841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:45.887 [2024-12-10 00:14:20.819459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.819605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.819625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.829234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.829375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.829395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.838804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.838944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.838967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.848427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.848564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.848584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.858025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.858168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.858187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.867616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.867758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.867777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.877211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.877352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.877371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.886817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.886957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.886977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.896385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.896528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.896548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.905976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.906116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.906136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.915569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.915709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.915729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.925517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.925663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.925683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.935131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 
[2024-12-10 00:14:20.935283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.935302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.944870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.945014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.945033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.954661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.954805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.954824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.964376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.964516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.964535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.973972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.974112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.974131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.983582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.983722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.983741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:20.993204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:20.993345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:20.993364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:21.002766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) 
with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:21.002906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:21.002925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:21.012428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.152 [2024-12-10 00:14:21.012567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.152 [2024-12-10 00:14:21.012587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.152 [2024-12-10 00:14:21.022031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.153 [2024-12-10 00:14:21.022175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.153 [2024-12-10 00:14:21.022194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.153 [2024-12-10 00:14:21.031659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.153 [2024-12-10 00:14:21.031797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.153 [2024-12-10 00:14:21.031816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.153 [2024-12-10 00:14:21.041267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.153 [2024-12-10 00:14:21.041407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.153 [2024-12-10 00:14:21.041426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.153 [2024-12-10 00:14:21.050886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.153 [2024-12-10 00:14:21.051025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.153 [2024-12-10 00:14:21.051043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.153 [2024-12-10 00:14:21.060510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.153 [2024-12-10 00:14:21.060653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.153 [2024-12-10 00:14:21.060672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.153 [2024-12-10 00:14:21.070114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.153 [2024-12-10 00:14:21.070264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.153 [2024-12-10 00:14:21.070283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.153 [2024-12-10 00:14:21.079882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.153 [2024-12-10 00:14:21.080024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.153 [2024-12-10 00:14:21.080042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.414 [2024-12-10 00:14:21.089749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.089891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.089914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.414 [2024-12-10 00:14:21.099475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.099613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.099632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.414 [2024-12-10 00:14:21.109088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.109237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.109256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.414 [2024-12-10 00:14:21.118710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.118850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.118869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.414 [2024-12-10 00:14:21.128321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.128463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.128481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.414 [2024-12-10 00:14:21.138009] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.138148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.138174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.414 [2024-12-10 00:14:21.147633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.147776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.147795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.414 [2024-12-10 00:14:21.157227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.157369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.157388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.414 [2024-12-10 00:14:21.166824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.166964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.166983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.414 [2024-12-10 00:14:21.176569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.176715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.176734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.414 [2024-12-10 00:14:21.186188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.186330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.186349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.414 [2024-12-10 00:14:21.195797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.195939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.195958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:32:46.414 [2024-12-10 00:14:21.205402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.205546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.205565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.414 [2024-12-10 00:14:21.215025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.215172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.215191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.414 [2024-12-10 00:14:21.224634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.414 [2024-12-10 00:14:21.224775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.414 [2024-12-10 00:14:21.224794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.415 [2024-12-10 00:14:21.234219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.415 [2024-12-10 00:14:21.234361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.415 [2024-12-10 00:14:21.234380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.415 [2024-12-10 00:14:21.243843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.415 [2024-12-10 00:14:21.243983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.415 [2024-12-10 00:14:21.244001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.415 [2024-12-10 00:14:21.253456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.415 [2024-12-10 00:14:21.253598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.415 [2024-12-10 00:14:21.253616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.415 [2024-12-10 00:14:21.263060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.415 [2024-12-10 00:14:21.263209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.415 [2024-12-10 00:14:21.263228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.415 [2024-12-10 00:14:21.272699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.415 [2024-12-10 00:14:21.272840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.415 [2024-12-10 00:14:21.272858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.415 [2024-12-10 00:14:21.282301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.415 [2024-12-10 00:14:21.282440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.415 [2024-12-10 00:14:21.282459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.415 [2024-12-10 00:14:21.291916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.415 [2024-12-10 00:14:21.292056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.415 [2024-12-10 00:14:21.292075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.415 [2024-12-10 00:14:21.301538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.415 [2024-12-10 00:14:21.301689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.415 [2024-12-10 00:14:21.301708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.415 [2024-12-10 00:14:21.311152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.415 [2024-12-10 00:14:21.311302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.415 [2024-12-10 00:14:21.311320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.415 [2024-12-10 00:14:21.320770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.415 [2024-12-10 00:14:21.320910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.415 [2024-12-10 00:14:21.320929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.415 [2024-12-10 00:14:21.330384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.415 [2024-12-10 00:14:21.330527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.415 [2024-12-10 00:14:21.330545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.415 [2024-12-10 00:14:21.340003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.415 [2024-12-10 00:14:21.340145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.415 [2024-12-10 00:14:21.340171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.349858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.350001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.350019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.359665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.359803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.359822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.369292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.369434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.369453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.378889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.379031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.379051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.388497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.388634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.388653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.398087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.398231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.398251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 26447.00 IOPS, 103.31 MiB/s [2024-12-09T23:14:21.611Z] [2024-12-10 00:14:21.407711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.408034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.408054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.417321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.417465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.417487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.427069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.427232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.427253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.436706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.436845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.436865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.446326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.446466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.446485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.455943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.456084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.456105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.465557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.465698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11152 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.465717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.475196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.475339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.475358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.484799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.484937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.484956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.494384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.494525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.494544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.503981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.675 [2024-12-10 00:14:21.504121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.675 [2024-12-10 00:14:21.504144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.675 [2024-12-10 00:14:21.513577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.676 [2024-12-10 00:14:21.513718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.676 [2024-12-10 00:14:21.513737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.676 [2024-12-10 00:14:21.523196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.676 [2024-12-10 00:14:21.523336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.676 [2024-12-10 00:14:21.523355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.676 [2024-12-10 00:14:21.532810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.676 [2024-12-10 00:14:21.532949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:113 nsid:1 lba:17167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.676 [2024-12-10 00:14:21.532968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.676 [2024-12-10 00:14:21.542380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.676 [2024-12-10 00:14:21.542520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.676 [2024-12-10 00:14:21.542539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.676 [2024-12-10 00:14:21.551977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.676 [2024-12-10 00:14:21.552118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.676 [2024-12-10 00:14:21.552137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.676 [2024-12-10 00:14:21.561557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.676 [2024-12-10 00:14:21.561698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.676 [2024-12-10 00:14:21.561717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.676 [2024-12-10 00:14:21.571155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.676 [2024-12-10 00:14:21.571306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.676 [2024-12-10 00:14:21.571325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.676 [2024-12-10 00:14:21.580825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.676 [2024-12-10 00:14:21.580967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.676 [2024-12-10 00:14:21.580985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.676 [2024-12-10 00:14:21.590418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.676 [2024-12-10 00:14:21.590562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.676 [2024-12-10 00:14:21.590580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.676 [2024-12-10 00:14:21.600040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.676 [2024-12-10 00:14:21.600185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.676 [2024-12-10 00:14:21.600205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.935 [2024-12-10 00:14:21.609867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.935 [2024-12-10 00:14:21.610011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.935 [2024-12-10 00:14:21.610030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.935 [2024-12-10 00:14:21.619580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.935 [2024-12-10 00:14:21.619724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.935 [2024-12-10 00:14:21.619742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.935 [2024-12-10 00:14:21.629209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.935 [2024-12-10 00:14:21.629349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.935 [2024-12-10 00:14:21.629368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.935 [2024-12-10 00:14:21.638817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.935 [2024-12-10 00:14:21.638957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.935 [2024-12-10 00:14:21.638976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.935 [2024-12-10 00:14:21.648402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.935 [2024-12-10 00:14:21.648539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.935 [2024-12-10 00:14:21.648558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.935 [2024-12-10 00:14:21.658007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.935 [2024-12-10 00:14:21.658148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.935 [2024-12-10 00:14:21.658173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.935 [2024-12-10 00:14:21.667619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.935 
[2024-12-10 00:14:21.667760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.935 [2024-12-10 00:14:21.667780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.935 [2024-12-10 00:14:21.677380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.935 [2024-12-10 00:14:21.677520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.935 [2024-12-10 00:14:21.677539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.935 [2024-12-10 00:14:21.686979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.935 [2024-12-10 00:14:21.687118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.935 [2024-12-10 00:14:21.687137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.935 [2024-12-10 00:14:21.696621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.935 [2024-12-10 00:14:21.696762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.935 [2024-12-10 00:14:21.696781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.935 [2024-12-10 00:14:21.706234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.935 [2024-12-10 00:14:21.706377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.935 [2024-12-10 00:14:21.706396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.935 [2024-12-10 00:14:21.715842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.935 [2024-12-10 00:14:21.715982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.716001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.725449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.725591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.725609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.735052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) 
with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.735201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.735219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.744640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.744779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.744798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.754251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.754395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.754416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.763847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.763986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.764005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.773432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.773566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.773584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.782946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.783086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.783105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.792518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.792658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.792677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.802085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.802232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.802251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.811765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.811903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.811922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.821324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.821465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.821484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.830926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.831066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.831085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.840540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.840685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.840704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.850137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.850283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.850302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:46.936 [2024-12-10 00:14:21.859759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:46.936 [2024-12-10 00:14:21.859899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.936 [2024-12-10 00:14:21.859918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.195 [2024-12-10 00:14:21.869542] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.195 [2024-12-10 00:14:21.869686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:21.869706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:21.879287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:21.879428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:21.879446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:21.888892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:21.889030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:21.889049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:21.898488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:21.898628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:21.898647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:21.908099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:21.908246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:21.908265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:21.917713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:21.917852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:21.917871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:21.927654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:21.927798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:21.927817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
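Each "Data digest error" line logged by tcp.c's data_crc32_calc_done callback indicates that the CRC32C recomputed over a received PDU's data did not match the DDGST digest carried in the PDU; NVMe/TCP defines both the header and data digests as CRC32C. Below is a minimal bitwise reference for that checksum (Castagnoli polynomial, reflected form 0x82F63B78), sketched for illustration only; it is not SPDK's table-driven/accelerated implementation. As a self-check, it should print the standard CRC-32C test value 0xe3069283 for the string "123456789":

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), the digest algorithm used for NVMe/TCP HDGST/DDGST. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xffffffffu;   /* standard initial value */

    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++) {
            /* Reflected Castagnoli polynomial 0x82F63B78. */
            crc = (crc >> 1) ^ (0x82f63b78u & (0u - (crc & 1u)));
        }
    }
    return ~crc;                  /* final XOR */
}

int main(void)
{
    const char *check = "123456789";

    /* Expected output: crc32c("123456789") = 0xe3069283 */
    printf("crc32c(\"%s\") = 0x%08x\n", check, (unsigned)crc32c(check, strlen(check)));
    return 0;
}

A digest error like the ones above simply means this computation over the received data disagrees with the digest the sender appended, so the command is completed with the transient transport error status rather than being acknowledged.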
00:32:47.196 [2024-12-10 00:14:21.937250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:21.937393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:21.937411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:21.946845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:21.946984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:21.947004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:21.956441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:21.956579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:21.956598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:21.966040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:21.966180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:21.966199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:21.975678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:21.975820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:21.975839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:21.985247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:21.985386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:21.985405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:21.994833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:21.994975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:21.994993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:22.004407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:22.004548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:22.004569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:22.014010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:22.014149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:22.014173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:22.023608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:22.023749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:22.023767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:22.033211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:22.033354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:22.033373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:22.042808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:22.042947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:22.042966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:22.052409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:22.052550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:22.052569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:22.062010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:22.062150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:22.062174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:22.071589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:22.071729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:22.071747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:22.081205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:22.081346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:22.081364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:22.090797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:22.090942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:22.090960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:22.100392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:22.100531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:22.100550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:22.109958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:22.110099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:22.110118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.196 [2024-12-10 00:14:22.119564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.196 [2024-12-10 00:14:22.119702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.196 [2024-12-10 00:14:22.119720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.456 [2024-12-10 00:14:22.129347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.456 [2024-12-10 00:14:22.129493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.456 [2024-12-10 00:14:22.129512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.456 [2024-12-10 00:14:22.139198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.456 [2024-12-10 00:14:22.139339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.456 [2024-12-10 00:14:22.139357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.456 [2024-12-10 00:14:22.148793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.456 [2024-12-10 00:14:22.148933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.456 [2024-12-10 00:14:22.148952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.456 [2024-12-10 00:14:22.158396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.456 [2024-12-10 00:14:22.158535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.456 [2024-12-10 00:14:22.158554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.456 [2024-12-10 00:14:22.167968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.456 [2024-12-10 00:14:22.168109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.456 [2024-12-10 00:14:22.168128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.456 [2024-12-10 00:14:22.177724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.456 [2024-12-10 00:14:22.177865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.456 [2024-12-10 00:14:22.177884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.456 [2024-12-10 00:14:22.187300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.456 [2024-12-10 00:14:22.187443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.456 [2024-12-10 00:14:22.187462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.456 [2024-12-10 00:14:22.196914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.456 [2024-12-10 00:14:22.197052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.456 
[2024-12-10 00:14:22.197071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.456 [2024-12-10 00:14:22.206500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.456 [2024-12-10 00:14:22.206640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.456 [2024-12-10 00:14:22.206658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.456 [2024-12-10 00:14:22.216071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.456 [2024-12-10 00:14:22.216215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.456 [2024-12-10 00:14:22.216234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.456 [2024-12-10 00:14:22.225697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.456 [2024-12-10 00:14:22.225837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.456 [2024-12-10 00:14:22.225856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.456 [2024-12-10 00:14:22.235267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.456 [2024-12-10 00:14:22.235409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.456 [2024-12-10 00:14:22.235428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.244862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.245001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.245019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.254467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.254609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.254631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.264050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.264197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10343 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.264216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.273633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.273776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.273794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.283269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.283411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.283429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.292871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.293010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.293028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.302473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.302612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.302630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.312056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.312200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.312219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.321683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.321826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.321845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.331331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.331472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 
nsid:1 lba:9938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.331490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.340936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.341081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.341100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.350563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.350703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.350722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.360277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.360419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.360438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.370141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.370293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.370312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.457 [2024-12-10 00:14:22.380043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.457 [2024-12-10 00:14:22.380195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.457 [2024-12-10 00:14:22.380214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.716 [2024-12-10 00:14:22.389895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.716 [2024-12-10 00:14:22.390041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:47.716 [2024-12-10 00:14:22.390061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:47.716 [2024-12-10 00:14:22.399792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90 00:32:47.716 [2024-12-10 00:14:22.399939] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.716 [2024-12-10 00:14:22.399958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:32:47.716 [2024-12-10 00:14:22.409413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dbd90) with pdu=0x200016efef90
00:32:47.716 26495.50 IOPS, 103.50 MiB/s [2024-12-09T23:14:22.652Z]
[2024-12-10 00:14:22.410123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:47.716 [2024-12-10 00:14:22.410142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:32:47.716
00:32:47.716 Latency(us)
00:32:47.716 [2024-12-09T23:14:22.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:47.716 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:32:47.716 nvme0n1 : 2.01 26496.24 103.50 0.00 0.00 4822.41 2678.43 10029.86
00:32:47.716 [2024-12-09T23:14:22.652Z] ===================================================================================================================
00:32:47.716 [2024-12-09T23:14:22.652Z] Total : 26496.24 103.50 0.00 0.00 4822.41 2678.43 10029.86
00:32:47.716 {
00:32:47.716 "results": [
00:32:47.716 {
00:32:47.716 "job": "nvme0n1",
00:32:47.716 "core_mask": "0x2",
00:32:47.716 "workload": "randwrite",
00:32:47.716 "status": "finished",
00:32:47.716 "queue_depth": 128,
00:32:47.716 "io_size": 4096,
00:32:47.716 "runtime": 2.005983,
00:32:47.716 "iops": 26496.236508484868,
00:32:47.716 "mibps": 103.50092386126902,
00:32:47.716 "io_failed": 0,
00:32:47.716 "io_timeout": 0,
00:32:47.716 "avg_latency_us": 4822.414789169168,
00:32:47.716 "min_latency_us": 2678.4278260869564,
00:32:47.716 "max_latency_us": 10029.857391304347
00:32:47.716 }
00:32:47.716 ],
00:32:47.716 "core_count": 1
00:32:47.716 }
00:32:47.716 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:47.716 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:47.716 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:47.716 | .driver_specific
00:32:47.716 | .nvme_error
00:32:47.716 | .status_code
00:32:47.716 | .command_transient_transport_error'
00:32:47.716 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:47.716 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 ))
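The (( 208 > 0 )) check above is where digest.sh decides whether this first randwrite pass actually produced digest failures: it reads back the per-bdev NVMe error counters that bdev_nvme_set_options --nvme-error-stat appears to enable, via bdev_get_iostat on the bperf.sock RPC socket, and requires at least one COMMAND TRANSIENT TRANSPORT ERROR completion. A minimal standalone sketch of that readback, assuming the same workspace paths and socket seen in this job and using a helper name that merely mirrors the traced get_transient_errcount, might look like this:

  # Hedged sketch (bash), mirroring the trace above; not the digest.sh implementation itself.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  get_transient_errcount() {
      # bdev_get_iostat -b <bdev> returns JSON; the jq path below is copied
      # verbatim from the trace (nvme_error counters keyed by status-code name).
      "$rpc" -s "$sock" bdev_get_iostat -b "$1" \
          | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }

  count=$(get_transient_errcount nvme0n1)
  (( count > 0 )) || exit 1   # the run above counted 208 such completions

The same readback is repeated after every workload variant, so a zero count at any point fails the whole nvmf_digest_error case.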
00:32:47.717 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 519423
00:32:47.717 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 519423 ']'
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 519423
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 519423
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 519423'
00:32:47.976 killing process with pid 519423
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 519423
00:32:47.976 Received shutdown signal, test time was about 2.000000 seconds
00:32:47.976
00:32:47.976 Latency(us)
00:32:47.976 [2024-12-09T23:14:22.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:47.976 [2024-12-09T23:14:22.912Z] ===================================================================================================================
00:32:47.976 [2024-12-09T23:14:22.912Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 519423
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=520108
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 520108 /var/tmp/bperf.sock
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 520108 ']'
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:47.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:47.976 00:14:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:32:48.235 [2024-12-10 00:14:22.912924] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
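For the next case the test relaunches bdevperf with a 128 KiB randwrite workload at queue depth 16; the -z flag appears to keep the app idle until a perform_tests RPC arrives (one is indeed issued later in the trace), and waitforlisten blocks until the private RPC socket answers. A rough stand-in for that launch-and-wait step, polling the socket with the stock rpc_get_methods query instead of the real autotest_common.sh helper, could be:

  # Hedged sketch (bash); waitforlisten in autotest_common.sh does more bookkeeping than this.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
  sock=/var/tmp/bperf.sock

  # -r points the app's RPC server at the private bperf.sock; -z defers the
  # workload until perform_tests is called, matching the trace above.
  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

  # Poll until the RPC server responds; rpc_get_methods is a cheap query.
  for _ in $(seq 1 100); do
      "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done

Only once the socket answers does the script start issuing the bdev_nvme and accel RPCs that follow.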
00:32:48.235 [2024-12-10 00:14:22.912971] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520108 ] 00:32:48.235 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:48.235 Zero copy mechanism will not be used. 00:32:48.235 [2024-12-10 00:14:22.985640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.235 [2024-12-10 00:14:23.026729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.235 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:48.235 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:32:48.235 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:48.235 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:48.493 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:48.493 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.493 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:48.493 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.493 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:48.494 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:49.062 nvme0n1 00:32:49.062 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:49.062 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.062 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:49.062 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.062 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:49.062 00:14:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:49.062 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:49.062 Zero copy mechanism will not be used. 00:32:49.062 Running I/O for 2 seconds... 
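Stripped of the xtrace noise, the setup traced above for this digest-error run reduces to a short RPC sequence against the bdevperf instance: enable NVMe error accounting, clear any previous crc32c error injection, attach the NVMe/TCP controller with data digest (--ddgst) enabled, arm crc32c corruption in the accel error module, and finally start the queued job. A condensed sketch, with every command copied from the trace and only the ordering comments added, might read:

  # Hedged sketch (bash); commands are verbatim from the trace, wrapper is mine.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
  rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

  # 1. Per-status-code NVMe error counters and retry behaviour, as digest.sh sets them.
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # 2. Start from a clean accel state, then attach the target with data digest
  #    enabled on the NVMe/TCP connection.
  rpc accel_error_inject_error -o crc32c -t disable
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 3. Arm crc32c corruption (the -i 32 argument is taken verbatim from the
  #    trace) and kick off the queued bdevperf workload.
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

With the corruption armed, the data digest calculations on the write path come back wrong, which is why the two-second run that follows is a steady stream of data_crc32_calc_done errors and COMMAND TRANSIENT TRANSPORT ERROR completions.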
00:32:49.062 [2024-12-10 00:14:23.857801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.857894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.857926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.863919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.863983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.864012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.868695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.868760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.868784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.873805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.873882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.873905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.879176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.879496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.879518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.885287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.885529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.885550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.890450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.890724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.890745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.896150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.896434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.896456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.902266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.902535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.902556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.908297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.908600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.908622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.913655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.913933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.913954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.918826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.919093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.919115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.923719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.923993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.924016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.928250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.928505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.928526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.932660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.932924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.932946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.936946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.062 [2024-12-10 00:14:23.937222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.062 [2024-12-10 00:14:23.937243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.062 [2024-12-10 00:14:23.941182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.063 [2024-12-10 00:14:23.941446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.063 [2024-12-10 00:14:23.941467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.063 [2024-12-10 00:14:23.945418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.063 [2024-12-10 00:14:23.945682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.063 [2024-12-10 00:14:23.945702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.063 [2024-12-10 00:14:23.949629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.063 [2024-12-10 00:14:23.949906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.063 [2024-12-10 00:14:23.949926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.063 [2024-12-10 00:14:23.953871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.063 [2024-12-10 00:14:23.954152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.063 [2024-12-10 00:14:23.954180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.063 [2024-12-10 00:14:23.958146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.063 [2024-12-10 00:14:23.958418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.063 [2024-12-10 00:14:23.958438] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.063 [2024-12-10 00:14:23.962373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.063 [2024-12-10 00:14:23.962640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.063 [2024-12-10 00:14:23.962661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.063 [2024-12-10 00:14:23.966600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.063 [2024-12-10 00:14:23.966872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.063 [2024-12-10 00:14:23.966893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.063 [2024-12-10 00:14:23.970780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.063 [2024-12-10 00:14:23.971048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.063 [2024-12-10 00:14:23.971069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.063 [2024-12-10 00:14:23.974939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.063 [2024-12-10 00:14:23.975211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.063 [2024-12-10 00:14:23.975231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.063 [2024-12-10 00:14:23.979090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.063 [2024-12-10 00:14:23.979370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.063 [2024-12-10 00:14:23.979396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.063 [2024-12-10 00:14:23.983120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.063 [2024-12-10 00:14:23.983358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.063 [2024-12-10 00:14:23.983379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.063 [2024-12-10 00:14:23.987119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.063 [2024-12-10 00:14:23.987357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.063 [2024-12-10 00:14:23.987377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.063 [2024-12-10 00:14:23.991096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.063 [2024-12-10 00:14:23.991339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.063 [2024-12-10 00:14:23.991361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.063 [2024-12-10 00:14:23.995134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.063 [2024-12-10 00:14:23.995376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.063 [2024-12-10 00:14:23.995398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:23.999201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:23.999430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:23.999450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.003198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.003437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:24.003457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.007142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.007383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:24.007404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.011066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.011302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:24.011322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.015021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.015267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 
00:14:24.015289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.018862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.019090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:24.019111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.022604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.022807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:24.022828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.026345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.026561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:24.026581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.030076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.030310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:24.030330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.033787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.034003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:24.034023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.037507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.037714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:24.037734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.041228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.041442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:49.323 [2024-12-10 00:14:24.041462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.044954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.045194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:24.045214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.048689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.048873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:24.048893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.052437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.052633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:24.052653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.056206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.056364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:24.056384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.323 [2024-12-10 00:14:24.059929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.323 [2024-12-10 00:14:24.060121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.323 [2024-12-10 00:14:24.060141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.063703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.063895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.063915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.067430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.067627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.067648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.071174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.071365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.071385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.074896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.075088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.075108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.078986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.079191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.079215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.082806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.082993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.083013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.086721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.086896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.086916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.090627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.090798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.090817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.094465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.094653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.094673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.098438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.098620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.098641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.102552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.102713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.102733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.106558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.106687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.106707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.110628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.110776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.110796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.114634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.114752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.114772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.118556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.118686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.118707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.122574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.122690] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.122710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.126456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.126567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.126586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.130380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.130511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.130531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.134093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.134230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.134250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.138362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.138533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.138554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.143670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.143872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.143892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.149297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.149519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.149539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.154381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.154595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.154616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.159453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.159690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.159710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.165077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.165299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.165320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.170275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.170491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.170512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.175416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.175668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.324 [2024-12-10 00:14:24.175688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.324 [2024-12-10 00:14:24.180550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.324 [2024-12-10 00:14:24.180776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.180797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.185750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 00:14:24.185950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.185971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.190857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 
00:14:24.191052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.191072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.195997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 00:14:24.196147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.196178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.201272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 00:14:24.201480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.201500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.206670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 00:14:24.206852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.206872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.212294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 00:14:24.212458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.212478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.217419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 00:14:24.217628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.217648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.222643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 00:14:24.222873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.222893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.227815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with 
pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 00:14:24.228061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.228083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.233022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 00:14:24.233179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.233199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.237041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 00:14:24.237164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.237184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.241310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 00:14:24.241414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.241438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.245486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 00:14:24.245611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.245631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.249708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 00:14:24.249828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.249848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.325 [2024-12-10 00:14:24.253703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.325 [2024-12-10 00:14:24.253811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.325 [2024-12-10 00:14:24.253831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.258048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.258196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.258217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.263574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.263716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.263737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.269037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.269170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.269190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.274403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.274558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.274578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.279662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.279849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.279869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.285232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.285445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.285465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.290402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.290561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.290581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.295885] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.296010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.296030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.301341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.301441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.301460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.307070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.307214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.307234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.312550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.312705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.312726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.318124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.318209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.318231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.322429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.322492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.322516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.326268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.326327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.326354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.330083] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.330168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.330188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.333852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.333933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.333952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.337620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.337683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.337706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.341344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.341424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.341443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.345091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.345165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.345187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.348822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.348878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.348900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.352865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.352961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.352980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.586 
[2024-12-10 00:14:24.357472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.357631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.357651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.362430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.362512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.362538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.367408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.367468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.367491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.372043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.372119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.372142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.376667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.376740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.376761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.381455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.586 [2024-12-10 00:14:24.381516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.586 [2024-12-10 00:14:24.381539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.586 [2024-12-10 00:14:24.385900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.386014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.386034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.390594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.390690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.390710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.395616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.395705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.395727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.400630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.400784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.400802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.405518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.405586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.405609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.409956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.410041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.410064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.414378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.414448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.414470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.419057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.419114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.419135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.423326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.423402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.423423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.428032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.428096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.428117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.432558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.432630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.432652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.436913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.436987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.437008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.441710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.441875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.441899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.446101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.446176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.446199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.451373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.451445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.451467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.455619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.455685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.455707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.459649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.459713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.459736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.463557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.463619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.463642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.468088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.468198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.468218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.473667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.473854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.473875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.479299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.479396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.479416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.485184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.485289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.485313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.490874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.490969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.490988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.496468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.496602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.496622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.502489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.502618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.502637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.508520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.508627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.508646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.587 [2024-12-10 00:14:24.514514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.587 [2024-12-10 00:14:24.514628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.587 [2024-12-10 00:14:24.514648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.848 [2024-12-10 00:14:24.520770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.520879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.520899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.526268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.526394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 
00:14:24.526413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.530843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.530926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.530945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.535677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.535743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.535766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.539653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.539709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.539731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.543619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.543682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.543705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.547527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.547594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.547616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.551534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.551597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.551620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.555577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.555638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:49.849 [2024-12-10 00:14:24.555660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.559622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.559682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.559705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.563588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.563651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.563675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.567475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.567544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.567571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.571755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.571830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.571863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.576311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.576372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.576395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.580353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.580418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.580441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.584434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.584501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.584526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.588390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.588450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.588474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.592310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.592377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.592400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.596250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.596309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.596331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.600252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.600327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.600350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.604177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.604253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.604279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.608072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.608128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.608150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.612137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.612210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.612233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.615955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.616015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.616038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.619988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.620051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.620074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.624793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.624856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.624878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.629271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.849 [2024-12-10 00:14:24.629350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.849 [2024-12-10 00:14:24.629373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.849 [2024-12-10 00:14:24.633514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.633584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.633606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.637352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.637417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.637439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.641222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.641293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.641315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.645016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.645080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.645102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.648923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.649004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.649024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.653191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.653256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.653278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.657583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.657667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.657687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.661614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.661683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.661706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.665496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.665553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.665577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.669379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.669485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.669505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.673304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.673384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.673408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.677168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.677234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.677257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.681042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.681129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.681148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.684978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.685055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.685075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.688887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.688964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.688984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.692897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.692971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.692993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.697046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 
00:14:24.697116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.697138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.701046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.701116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.701138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.705089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.705170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.705191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.709058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.709120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.709147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.712959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.713074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.713094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.716924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.717001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.717021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.720785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.720873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.720893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.724655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with 
pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.724732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.724752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.728549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.728634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.728653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.732445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.732517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.732538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.736381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.736444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.736467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.850 [2024-12-10 00:14:24.740199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.850 [2024-12-10 00:14:24.740267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.850 [2024-12-10 00:14:24.740289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.851 [2024-12-10 00:14:24.744174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.851 [2024-12-10 00:14:24.744246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.851 [2024-12-10 00:14:24.744267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.851 [2024-12-10 00:14:24.748010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.851 [2024-12-10 00:14:24.748073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.851 [2024-12-10 00:14:24.748095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.851 [2024-12-10 00:14:24.751910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.851 [2024-12-10 00:14:24.751979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.851 [2024-12-10 00:14:24.752001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.851 [2024-12-10 00:14:24.755856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.851 [2024-12-10 00:14:24.755923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.851 [2024-12-10 00:14:24.755945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.851 [2024-12-10 00:14:24.759768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.851 [2024-12-10 00:14:24.759843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.851 [2024-12-10 00:14:24.759864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.851 [2024-12-10 00:14:24.763994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.851 [2024-12-10 00:14:24.764070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.851 [2024-12-10 00:14:24.764091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:49.851 [2024-12-10 00:14:24.768308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.851 [2024-12-10 00:14:24.768364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.851 [2024-12-10 00:14:24.768386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:49.851 [2024-12-10 00:14:24.772318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.851 [2024-12-10 00:14:24.772388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.851 [2024-12-10 00:14:24.772409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:49.851 [2024-12-10 00:14:24.776241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.851 [2024-12-10 00:14:24.776358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.851 [2024-12-10 00:14:24.776378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:49.851 [2024-12-10 00:14:24.780181] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:49.851 [2024-12-10 00:14:24.780263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:49.851 [2024-12-10 00:14:24.780284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.111 [2024-12-10 00:14:24.784233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.111 [2024-12-10 00:14:24.784326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.111 [2024-12-10 00:14:24.784345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.111 [2024-12-10 00:14:24.788273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.111 [2024-12-10 00:14:24.788340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.111 [2024-12-10 00:14:24.788364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.111 [2024-12-10 00:14:24.792329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.111 [2024-12-10 00:14:24.792405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.111 [2024-12-10 00:14:24.792426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.111 [2024-12-10 00:14:24.796245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.111 [2024-12-10 00:14:24.796318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.111 [2024-12-10 00:14:24.796338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.111 [2024-12-10 00:14:24.800047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.111 [2024-12-10 00:14:24.800124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.111 [2024-12-10 00:14:24.800144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.111 [2024-12-10 00:14:24.803970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.111 [2024-12-10 00:14:24.804046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.111 [2024-12-10 00:14:24.804066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.111 [2024-12-10 00:14:24.807874] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.111 [2024-12-10 00:14:24.807929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.111 [2024-12-10 00:14:24.807950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.111 [2024-12-10 00:14:24.811785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.111 [2024-12-10 00:14:24.811858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.811882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.815706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.815808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.815827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.820010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.820175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.820194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.825334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.825467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.825487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.830774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.830877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.830897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.836612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.836690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.836711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.112 
[2024-12-10 00:14:24.843336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.843449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.843469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.848194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.848250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.848272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.853231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.853303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.853325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.857264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.857343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.857364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.112 6987.00 IOPS, 873.38 MiB/s [2024-12-09T23:14:25.048Z] [2024-12-10 00:14:24.862376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.862576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.862598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.866201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.866393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.866414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.870058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.870248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.870269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.873942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.874129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.874150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.877774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.877969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.877989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.881571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.881752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.881773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.885328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.885513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.885533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.889068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.889258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.889282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.892794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.892985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.893005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.896693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.896884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.896903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.900964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.901182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.901203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.906129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.906349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.906370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.911757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.911955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.911975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.916836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.917070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.917090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.922051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.922352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.922374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.927127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.927421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.927442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.932320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.932586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 
00:14:24.932607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.112 [2024-12-10 00:14:24.937548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.112 [2024-12-10 00:14:24.937799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.112 [2024-12-10 00:14:24.937819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:24.943091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:24.943339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:24.943360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:24.948074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:24.948228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:24.948249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:24.953563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:24.953726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:24.953746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:24.958674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:24.958926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:24.958947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:24.964284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:24.964540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:24.964561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:24.969369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:24.969595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:50.113 [2024-12-10 00:14:24.969616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:24.974618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:24.974820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:24.974841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:24.979716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:24.979903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:24.979924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:24.985005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:24.985179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:24.985200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:24.990311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:24.990449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:24.990469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:24.994469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:24.994631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:24.994651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:24.998545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:24.998704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:24.998724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:25.002621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:25.002787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:25.002808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:25.006668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:25.006844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:25.006865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:25.010777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:25.010920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:25.010940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:25.014714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:25.014860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:25.014884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:25.018581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:25.018738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:25.018758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:25.022528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:25.022676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:25.022696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:25.026437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:25.026619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:25.026640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:25.030244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:25.030388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:25.030409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:25.034174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:25.034433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:25.034455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.113 [2024-12-10 00:14:25.039238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.113 [2024-12-10 00:14:25.039438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.113 [2024-12-10 00:14:25.039459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.044487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.044763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.044785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.049554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.049718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.049738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.054580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.054758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.054779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.059702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.059851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.059871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.065067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.065262] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.065282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.070233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.070388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.070409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.075327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.075472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.075492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.080432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.080676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.080697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.085626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.085810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.085831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.091597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.091767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.091788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.096481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.096638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.096659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.100765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.100936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.100958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.104816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.105007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.105028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.108880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.109077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.109099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.113019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.113215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.113237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.117515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.117676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.117698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.121399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.121562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.121584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.125274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.373 [2024-12-10 00:14:25.125437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.373 [2024-12-10 00:14:25.125458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.373 [2024-12-10 00:14:25.129095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 
00:14:25.129259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.129281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.132919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.133074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.133099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.136694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.136852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.136873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.140481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.140640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.140660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.144268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.144440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.144461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.148051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.148217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.148239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.151808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.151964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.151985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.155731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 
00:32:50.374 [2024-12-10 00:14:25.155887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.155907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.159587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.159735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.159755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.163614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.163763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.163783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.168402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.168560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.168580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.172510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.172667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.172686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.176451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.176603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.176623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.180275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.180432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.180452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.184305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.184452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.184471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.188387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.188542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.188562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.192296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.192445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.192465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.196147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.196312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.196332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.200126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.200281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.200301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.204081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.204242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.204263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.207978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.208129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.208149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.211991] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.212145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.212173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.215981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.216137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.216163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.219913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.220068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.220088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.223904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.224052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.224072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.227877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.228034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.228055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.231789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.231945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.231965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.374 [2024-12-10 00:14:25.235737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.374 [2024-12-10 00:14:25.235893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.374 [2024-12-10 00:14:25.235917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.239619] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.239767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.239787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.243579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.243736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.243755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.247477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.247630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.247650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.251487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.251637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.251658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.255361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.255521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.255540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.259268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.259424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.259443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.263214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.263375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.263395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.375 
[2024-12-10 00:14:25.267184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.267341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.267361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.271095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.271262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.271283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.274976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.275132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.275152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.279553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.279705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.279724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.283981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.284136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.284156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.287880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.288031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.288051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.291668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.291824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.291843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.295526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.295677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.295697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.299411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.299560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.299580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.375 [2024-12-10 00:14:25.304218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.375 [2024-12-10 00:14:25.304375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.375 [2024-12-10 00:14:25.304395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.308742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.308897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.308918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.312749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.312904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.312925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.316731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.316882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.316901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.320601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.320755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.320776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.324501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.324653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.324673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.328512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.328669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.328689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.332374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.332526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.332545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.336321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.336472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.336491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.340146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.340305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.340328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.344088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.344253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.344273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.348012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.348173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.348193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.351887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.352038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.352058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.355835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.355987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.356007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.359949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.360106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.360127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.363908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.364061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.364080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.367879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.368035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.368054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.371776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.371926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.371947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.375681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.375841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.375862] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.379546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.379696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.379717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.383501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.383661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.383681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.387482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.387700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.387720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.391545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.391706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.391726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.395512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.636 [2024-12-10 00:14:25.395664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.636 [2024-12-10 00:14:25.395684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.636 [2024-12-10 00:14:25.399459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.399614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.399634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.403382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.403538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.403558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.407259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.407412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.407433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.411195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.411349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.411369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.415092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.415256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.415276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.419172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.419326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.419346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.423083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.423243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.423262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.427041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.427197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.427217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.430929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.431085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 
00:14:25.431106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.434944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.435108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.435128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.438918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.439081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.439103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.443078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.443250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.443273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.447068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.447232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.447252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.451034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.451201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.451221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.454957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.455076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.455095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.459012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.459169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:50.637 [2024-12-10 00:14:25.459190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.462998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.463164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.463185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.466984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.467139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.467166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.470967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.471126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.471146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.475005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.475165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.475186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.478992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.479173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.479193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.483708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.483882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.483903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.489437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.489704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.489724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.494523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.494724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.494744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.500600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.500758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.500778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.505632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.505803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.505823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.512278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.512428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.512448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.637 [2024-12-10 00:14:25.518204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.637 [2024-12-10 00:14:25.518422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.637 [2024-12-10 00:14:25.518442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.638 [2024-12-10 00:14:25.523843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.638 [2024-12-10 00:14:25.524002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.638 [2024-12-10 00:14:25.524023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.638 [2024-12-10 00:14:25.529298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.638 [2024-12-10 00:14:25.529456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.638 [2024-12-10 00:14:25.529476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.638 [2024-12-10 00:14:25.533327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.638 [2024-12-10 00:14:25.533483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.638 [2024-12-10 00:14:25.533503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.638 [2024-12-10 00:14:25.537290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.638 [2024-12-10 00:14:25.537440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.638 [2024-12-10 00:14:25.537460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.638 [2024-12-10 00:14:25.541208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.638 [2024-12-10 00:14:25.541361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.638 [2024-12-10 00:14:25.541381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.638 [2024-12-10 00:14:25.545089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.638 [2024-12-10 00:14:25.545251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.638 [2024-12-10 00:14:25.545272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.638 [2024-12-10 00:14:25.548936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.638 [2024-12-10 00:14:25.549094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.638 [2024-12-10 00:14:25.549114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.638 [2024-12-10 00:14:25.552747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.638 [2024-12-10 00:14:25.552900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.638 [2024-12-10 00:14:25.552920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.638 [2024-12-10 00:14:25.556591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.638 [2024-12-10 00:14:25.556741] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.638 [2024-12-10 00:14:25.556761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.638 [2024-12-10 00:14:25.560398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.638 [2024-12-10 00:14:25.560556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.638 [2024-12-10 00:14:25.560581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.638 [2024-12-10 00:14:25.564275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.638 [2024-12-10 00:14:25.564433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.638 [2024-12-10 00:14:25.564453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.638 [2024-12-10 00:14:25.568155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.638 [2024-12-10 00:14:25.568320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.638 [2024-12-10 00:14:25.568340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.572027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.907 [2024-12-10 00:14:25.572187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.907 [2024-12-10 00:14:25.572206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.575960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.907 [2024-12-10 00:14:25.576121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.907 [2024-12-10 00:14:25.576141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.579793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.907 [2024-12-10 00:14:25.579942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.907 [2024-12-10 00:14:25.579962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.583622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.907 [2024-12-10 00:14:25.583776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.907 [2024-12-10 00:14:25.583795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.587436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.907 [2024-12-10 00:14:25.587590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.907 [2024-12-10 00:14:25.587609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.591254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.907 [2024-12-10 00:14:25.591411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.907 [2024-12-10 00:14:25.591432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.595101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.907 [2024-12-10 00:14:25.595264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.907 [2024-12-10 00:14:25.595284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.599613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.907 [2024-12-10 00:14:25.599771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.907 [2024-12-10 00:14:25.599791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.604003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.907 [2024-12-10 00:14:25.604171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.907 [2024-12-10 00:14:25.604191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.608005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.907 [2024-12-10 00:14:25.608167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.907 [2024-12-10 00:14:25.608186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.611966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.907 [2024-12-10 
00:14:25.612120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.907 [2024-12-10 00:14:25.612139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.615815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.907 [2024-12-10 00:14:25.615971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.907 [2024-12-10 00:14:25.615990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.619785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.907 [2024-12-10 00:14:25.619938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.907 [2024-12-10 00:14:25.619958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.624593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.907 [2024-12-10 00:14:25.624750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.907 [2024-12-10 00:14:25.624770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.907 [2024-12-10 00:14:25.629148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.629318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.629337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.633235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.633387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.633407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.637263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.637419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.637439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.641310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with 
pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.641461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.641481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.645317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.645472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.645492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.649242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.649402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.649422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.653217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.653370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.653390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.657186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.657341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.657361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.661101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.661261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.661281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.665132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.665299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.665322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.669395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.669544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.669564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.673428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.673579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.673598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.677477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.677650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.677669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.681447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.681598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.681619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.685410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.685564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.685585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.689348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.689497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.689517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.693197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.693357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.693377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.697173] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.697324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.697343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.701495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.701649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.701668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.706233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.706385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.706405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.710233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.710384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.710404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.714180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.714340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.714359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.718113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.718268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.718288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.722258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.722414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.722434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.726225] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.726384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.726404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.730063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.730221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.730241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.733914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.734068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.734087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.737766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.737917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.737937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.908 [2024-12-10 00:14:25.742017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.908 [2024-12-10 00:14:25.742172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.908 [2024-12-10 00:14:25.742192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.746798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.746961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.746981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.751218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.751371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.751391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.909 
[2024-12-10 00:14:25.755212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.755366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.755386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.759094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.759255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.759275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.763009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.763168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.763187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.766977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.767130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.767149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.770871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.771028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.771051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.774747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.774900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.774919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.778859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.779008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.779028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.783619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.783772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.783791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.787752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.787909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.787928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.791780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.791934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.791954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.795744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.795901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.795920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.799628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.799780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.799800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.803431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.803582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.803601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.807382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.807543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.807563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.811755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.811919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.811939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.816372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.816526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.816545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.820402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.820553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.820573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.824354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.909 [2024-12-10 00:14:25.824507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.909 [2024-12-10 00:14:25.824526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:50.909 [2024-12-10 00:14:25.828290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.910 [2024-12-10 00:14:25.828442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.910 [2024-12-10 00:14:25.828462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:50.910 [2024-12-10 00:14:25.832293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.910 [2024-12-10 00:14:25.832449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.910 [2024-12-10 00:14:25.832469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:50.910 [2024-12-10 00:14:25.836264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:50.910 [2024-12-10 00:14:25.836418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:50.910 [2024-12-10 00:14:25.836438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.173 [2024-12-10 00:14:25.840173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:51.173 [2024-12-10 00:14:25.840337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.173 [2024-12-10 00:14:25.840357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.173 [2024-12-10 00:14:25.844196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:51.173 [2024-12-10 00:14:25.844351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.173 [2024-12-10 00:14:25.844371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.173 [2024-12-10 00:14:25.848127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:51.173 [2024-12-10 00:14:25.848294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.173 [2024-12-10 00:14:25.848314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:32:51.173 [2024-12-10 00:14:25.852172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:51.173 [2024-12-10 00:14:25.852324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.173 [2024-12-10 00:14:25.852343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:51.173 [2024-12-10 00:14:25.856965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:51.173 [2024-12-10 00:14:25.857118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.173 [2024-12-10 00:14:25.857138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:51.173 [2024-12-10 00:14:25.861334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12dc0d0) with pdu=0x200016eff3c8 00:32:51.173 [2024-12-10 00:14:25.862659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:51.173 [2024-12-10 00:14:25.862679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:51.173 7176.50 IOPS, 897.06 MiB/s 00:32:51.173 Latency(us) 00:32:51.173 [2024-12-09T23:14:26.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:51.173 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:51.173 nvme0n1 : 2.00 7172.87 896.61 0.00 0.00 2226.38 1659.77 7094.98 00:32:51.173 [2024-12-09T23:14:26.109Z] 
=================================================================================================================== 00:32:51.173 [2024-12-09T23:14:26.109Z] Total : 7172.87 896.61 0.00 0.00 2226.38 1659.77 7094.98 00:32:51.173 { 00:32:51.173 "results": [ 00:32:51.173 { 00:32:51.173 "job": "nvme0n1", 00:32:51.173 "core_mask": "0x2", 00:32:51.173 "workload": "randwrite", 00:32:51.173 "status": "finished", 00:32:51.173 "queue_depth": 16, 00:32:51.173 "io_size": 131072, 00:32:51.173 "runtime": 2.003243, 00:32:51.173 "iops": 7172.869192604192, 00:32:51.173 "mibps": 896.608649075524, 00:32:51.173 "io_failed": 0, 00:32:51.173 "io_timeout": 0, 00:32:51.173 "avg_latency_us": 2226.375237028991, 00:32:51.173 "min_latency_us": 1659.7704347826086, 00:32:51.173 "max_latency_us": 7094.984347826087 00:32:51.173 } 00:32:51.173 ], 00:32:51.173 "core_count": 1 00:32:51.173 } 00:32:51.173 00:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:51.173 00:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:51.173 00:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:51.173 | .driver_specific 00:32:51.173 | .nvme_error 00:32:51.173 | .status_code 00:32:51.173 | .command_transient_transport_error' 00:32:51.173 00:14:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:51.173 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 464 > 0 )) 00:32:51.173 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 520108 00:32:51.173 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 520108 ']' 00:32:51.173 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 520108 00:32:51.173 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:32:51.173 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:51.173 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 520108 00:32:51.433 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:51.433 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:51.433 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 520108' 00:32:51.433 killing process with pid 520108 00:32:51.433 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 520108 00:32:51.433 Received shutdown signal, test time was about 2.000000 seconds 00:32:51.433 00:32:51.433 Latency(us) 00:32:51.433 [2024-12-09T23:14:26.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:51.433 [2024-12-09T23:14:26.369Z] =================================================================================================================== 00:32:51.433 [2024-12-09T23:14:26.370Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:51.434 00:14:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 520108 00:32:51.434 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 518236 00:32:51.434 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 518236 ']' 00:32:51.434 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 518236 00:32:51.434 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:32:51.434 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:51.434 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 518236 00:32:51.434 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:51.434 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:51.434 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 518236' 00:32:51.434 killing process with pid 518236 00:32:51.434 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 518236 00:32:51.434 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 518236 00:32:51.692 00:32:51.692 real 0m14.234s 00:32:51.692 user 0m27.165s 00:32:51.692 sys 0m4.791s 00:32:51.692 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:51.692 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:51.692 ************************************ 00:32:51.692 END TEST nvmf_digest_error 00:32:51.692 ************************************ 00:32:51.692 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:32:51.692 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:32:51.692 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:51.692 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:32:51.692 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:51.692 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:32:51.692 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:51.692 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:51.692 rmmod nvme_tcp 00:32:51.692 rmmod nvme_fabrics 00:32:51.692 rmmod nvme_keyring 00:32:51.692 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 518236 ']' 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 518236 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 518236 ']' 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@958 -- # kill -0 518236 00:32:51.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (518236) - No such process 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 518236 is not found' 00:32:51.951 Process with pid 518236 is not found 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.951 00:14:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.857 00:14:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:53.857 00:32:53.857 real 0m36.748s 00:32:53.857 user 0m55.962s 00:32:53.857 sys 0m13.899s 00:32:53.857 00:14:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:53.857 00:14:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:53.857 ************************************ 00:32:53.857 END TEST nvmf_digest 00:32:53.857 ************************************ 00:32:53.857 00:14:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:32:53.857 00:14:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:32:53.857 00:14:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:32:53.857 00:14:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:53.857 00:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:53.857 00:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:53.857 00:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.857 ************************************ 00:32:53.857 START TEST nvmf_bdevperf 00:32:53.857 ************************************ 00:32:53.857 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:32:54.117 * Looking for test storage... 
00:32:54.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:54.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.117 --rc genhtml_branch_coverage=1 00:32:54.117 --rc genhtml_function_coverage=1 00:32:54.117 --rc genhtml_legend=1 00:32:54.117 --rc geninfo_all_blocks=1 00:32:54.117 --rc geninfo_unexecuted_blocks=1 00:32:54.117 00:32:54.117 ' 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:54.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.117 --rc genhtml_branch_coverage=1 00:32:54.117 --rc genhtml_function_coverage=1 00:32:54.117 --rc genhtml_legend=1 00:32:54.117 --rc geninfo_all_blocks=1 00:32:54.117 --rc geninfo_unexecuted_blocks=1 00:32:54.117 00:32:54.117 ' 00:32:54.117 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:54.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.117 --rc genhtml_branch_coverage=1 00:32:54.117 --rc genhtml_function_coverage=1 00:32:54.117 --rc genhtml_legend=1 00:32:54.117 --rc geninfo_all_blocks=1 00:32:54.117 --rc geninfo_unexecuted_blocks=1 00:32:54.117 00:32:54.118 ' 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:54.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:54.118 --rc genhtml_branch_coverage=1 00:32:54.118 --rc genhtml_function_coverage=1 00:32:54.118 --rc genhtml_legend=1 00:32:54.118 --rc geninfo_all_blocks=1 00:32:54.118 --rc geninfo_unexecuted_blocks=1 00:32:54.118 00:32:54.118 ' 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:54.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.118 00:14:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.118 00:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.118 00:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:54.118 00:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:54.118 00:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:32:54.118 00:14:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:00.690 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:00.691 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:00.691 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:00.691 Found net devices under 0000:86:00.0: cvl_0_0 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:00.691 Found net devices under 0000:86:00.1: cvl_0_1 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:00.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:00.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:33:00.691 00:33:00.691 --- 10.0.0.2 ping statistics --- 00:33:00.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.691 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:00.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:00.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:33:00.691 00:33:00.691 --- 10.0.0.1 ping statistics --- 00:33:00.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:00.691 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=524119 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 524119 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 524119 ']' 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:00.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:00.691 00:14:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:00.691 [2024-12-10 00:14:34.904676] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
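The nvmf_tcp_init trace above boils down to a small sequence: one e810 port (cvl_0_0) is moved into a private network namespace and becomes the target interface at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, connectivity is verified with a ping in each direction, and the nvmf target application is then launched inside the namespace. A condensed sketch of those steps is below; it only restates commands already traced in this run, and the interface names, namespace name, and addresses are specific to this machine.

# move the target-side port into its own namespace (names/addresses from this run)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic to the default port and sanity-check the path
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the SPDK NVMe-oF target inside the namespace (as in the nvmfappstart call above)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE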
00:33:00.691 [2024-12-10 00:14:34.904725] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:00.691 [2024-12-10 00:14:34.982911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:00.691 [2024-12-10 00:14:35.023674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:00.691 [2024-12-10 00:14:35.023710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:00.691 [2024-12-10 00:14:35.023717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:00.691 [2024-12-10 00:14:35.023723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:00.691 [2024-12-10 00:14:35.023728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:00.691 [2024-12-10 00:14:35.025101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:00.691 [2024-12-10 00:14:35.025210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:00.691 [2024-12-10 00:14:35.025209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.691 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:00.692 [2024-12-10 00:14:35.170149] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:00.692 Malloc0 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:00.692 [2024-12-10 00:14:35.240490] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:00.692 { 00:33:00.692 "params": { 00:33:00.692 "name": "Nvme$subsystem", 00:33:00.692 "trtype": "$TEST_TRANSPORT", 00:33:00.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.692 "adrfam": "ipv4", 00:33:00.692 "trsvcid": "$NVMF_PORT", 00:33:00.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.692 "hdgst": ${hdgst:-false}, 00:33:00.692 "ddgst": ${ddgst:-false} 00:33:00.692 }, 00:33:00.692 "method": "bdev_nvme_attach_controller" 00:33:00.692 } 00:33:00.692 EOF 00:33:00.692 )") 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:33:00.692 00:14:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:00.692 "params": { 00:33:00.692 "name": "Nvme1", 00:33:00.692 "trtype": "tcp", 00:33:00.692 "traddr": "10.0.0.2", 00:33:00.692 "adrfam": "ipv4", 00:33:00.692 "trsvcid": "4420", 00:33:00.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:00.692 "hdgst": false, 00:33:00.692 "ddgst": false 00:33:00.692 }, 00:33:00.692 "method": "bdev_nvme_attach_controller" 00:33:00.692 }' 00:33:00.692 [2024-12-10 00:14:35.290792] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
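At this point the target side is fully configured through four RPCs (TCP transport, a 64 MiB Malloc0 bdev, subsystem cnode1 with that namespace, and a listener on 10.0.0.2:4420), and gen_nvmf_target_json has emitted the bdev_nvme_attach_controller parameters that bdevperf receives on /dev/fd/62. A rough standalone equivalent is sketched below; it is only a condensation of the commands traced above, paths are relative to the SPDK tree, rpc.py talks to the default /var/tmp/spdk.sock socket, and the process substitution assumes test/nvmf/common.sh is sourced so gen_nvmf_target_json is available (in the harness the printed fragment is embedded in a full bdev-subsystem config rather than passed alone).

# target side: the same RPCs the test issues via rpc_cmd
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf consumes the generated JSON config and runs a
# 1-second verify workload at queue depth 128 with 4 KiB I/O over NVMe/TCP
./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1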
00:33:00.692 [2024-12-10 00:14:35.290833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524145 ] 00:33:00.692 [2024-12-10 00:14:35.365559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.692 [2024-12-10 00:14:35.405843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.692 Running I/O for 1 seconds... 00:33:02.068 11190.00 IOPS, 43.71 MiB/s 00:33:02.068 Latency(us) 00:33:02.068 [2024-12-09T23:14:37.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.068 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:02.068 Verification LBA range: start 0x0 length 0x4000 00:33:02.068 Nvme1n1 : 1.01 11186.66 43.70 0.00 0.00 11395.23 2564.45 11112.63 00:33:02.068 [2024-12-09T23:14:37.004Z] =================================================================================================================== 00:33:02.068 [2024-12-09T23:14:37.004Z] Total : 11186.66 43.70 0.00 0.00 11395.23 2564.45 11112.63 00:33:02.068 00:14:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=524382 00:33:02.068 00:14:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:02.068 00:14:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:02.068 00:14:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:02.068 00:14:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:33:02.068 00:14:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:33:02.068 00:14:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:02.068 00:14:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:02.068 { 00:33:02.068 "params": { 00:33:02.068 "name": "Nvme$subsystem", 00:33:02.068 "trtype": "$TEST_TRANSPORT", 00:33:02.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:02.069 "adrfam": "ipv4", 00:33:02.069 "trsvcid": "$NVMF_PORT", 00:33:02.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:02.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:02.069 "hdgst": ${hdgst:-false}, 00:33:02.069 "ddgst": ${ddgst:-false} 00:33:02.069 }, 00:33:02.069 "method": "bdev_nvme_attach_controller" 00:33:02.069 } 00:33:02.069 EOF 00:33:02.069 )") 00:33:02.069 00:14:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:33:02.069 00:14:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:33:02.069 00:14:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:33:02.069 00:14:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:02.069 "params": { 00:33:02.069 "name": "Nvme1", 00:33:02.069 "trtype": "tcp", 00:33:02.069 "traddr": "10.0.0.2", 00:33:02.069 "adrfam": "ipv4", 00:33:02.069 "trsvcid": "4420", 00:33:02.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:02.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:02.069 "hdgst": false, 00:33:02.069 "ddgst": false 00:33:02.069 }, 00:33:02.069 "method": "bdev_nvme_attach_controller" 00:33:02.069 }' 00:33:02.069 [2024-12-10 00:14:36.781528] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:33:02.069 [2024-12-10 00:14:36.781574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524382 ] 00:33:02.069 [2024-12-10 00:14:36.858425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.069 [2024-12-10 00:14:36.896166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.327 Running I/O for 15 seconds... 00:33:04.638 10988.00 IOPS, 42.92 MiB/s [2024-12-09T23:14:39.841Z] 11101.00 IOPS, 43.36 MiB/s [2024-12-09T23:14:39.842Z] 00:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 524119 00:33:04.906 00:14:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:04.906 [2024-12-10 00:14:39.759421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 
00:14:39.759572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.759981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.759989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:04.906 [2024-12-10 00:14:39.760192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.906 [2024-12-10 00:14:39.760337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.906 [2024-12-10 00:14:39.760351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760362] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.906 [2024-12-10 00:14:39.760369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.906 [2024-12-10 00:14:39.760385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.906 [2024-12-10 00:14:39.760400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.906 [2024-12-10 00:14:39.760415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.906 [2024-12-10 00:14:39.760431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.906 [2024-12-10 00:14:39.760445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.906 [2024-12-10 00:14:39.760453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.907 [2024-12-10 00:14:39.760587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.907 [2024-12-10 00:14:39.760602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.907 [2024-12-10 00:14:39.760617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.907 [2024-12-10 00:14:39.760631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.907 [2024-12-10 00:14:39.760646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:113 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96480 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:04.907 [2024-12-10 00:14:39.760971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.760986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.760994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761122] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761273] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.907 [2024-12-10 00:14:39.761299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.907 [2024-12-10 00:14:39.761305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.908 [2024-12-10 00:14:39.761464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.761563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:04.908 [2024-12-10 00:14:39.761569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:04.908 [2024-12-10 00:14:39.761576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdad410 is same with the state(6) to be set 00:33:04.908 [2024-12-10 00:14:39.761585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:04.908 [2024-12-10 00:14:39.761590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:04.908 [2024-12-10 00:14:39.761596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96880 len:8 PRP1 0x0 PRP2 0x0 00:33:04.908 [2024-12-10 00:14:39.761603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:04.908 [2024-12-10 00:14:39.764529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:04.908 [2024-12-10 00:14:39.764583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:04.908 [2024-12-10 00:14:39.765203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.908 [2024-12-10 00:14:39.765220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:04.908 [2024-12-10 00:14:39.765228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:04.908 [2024-12-10 00:14:39.765402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:04.908 [2024-12-10 00:14:39.765576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:04.908 [2024-12-10 00:14:39.765584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:04.908 [2024-12-10 00:14:39.765591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:04.908 [2024-12-10 00:14:39.765599] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:04.908 [2024-12-10 00:14:39.777861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:04.908 [2024-12-10 00:14:39.778229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.908 [2024-12-10 00:14:39.778252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:04.908 [2024-12-10 00:14:39.778260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:04.908 [2024-12-10 00:14:39.778434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:04.908 [2024-12-10 00:14:39.778607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:04.908 [2024-12-10 00:14:39.778616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:04.908 [2024-12-10 00:14:39.778624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:33:04.908 [2024-12-10 00:14:39.778631] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:04.908 [2024-12-10 00:14:39.790795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:04.908 [2024-12-10 00:14:39.791178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.908 [2024-12-10 00:14:39.791195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:04.908 [2024-12-10 00:14:39.791202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:04.908 [2024-12-10 00:14:39.791376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:04.908 [2024-12-10 00:14:39.791549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:04.908 [2024-12-10 00:14:39.791557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:04.908 [2024-12-10 00:14:39.791563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:04.908 [2024-12-10 00:14:39.791569] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:04.908 [2024-12-10 00:14:39.803650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:04.908 [2024-12-10 00:14:39.803931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.908 [2024-12-10 00:14:39.803949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:04.908 [2024-12-10 00:14:39.803956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:04.908 [2024-12-10 00:14:39.804129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:04.908 [2024-12-10 00:14:39.804309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:04.908 [2024-12-10 00:14:39.804317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:04.908 [2024-12-10 00:14:39.804324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:04.908 [2024-12-10 00:14:39.804330] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
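The pattern repeating above and below is the host-side fallout of host/bdevperf.sh@33 killing the NVMe-oF target process (kill -9 524119 earlier in this run): with nothing listening on 10.0.0.2:4420 any more, each reconnect attempt from the bdev_nvme layer gets connect() errno 111 (ECONNREFUSED), controller reinitialization fails, and the reset is reported as failed before the next attempt roughly 13 ms later. A minimal bash sketch of the same probe-and-retry idea follows; wait_for_target is a hypothetical helper for illustration only and is not part of the SPDK test scripts.

# Hypothetical illustration (not SPDK code): poll the target port the same way
# the reconnect loop above keeps doing, giving up after a bounded number of attempts.
wait_for_target() {
    local addr=${1:-10.0.0.2} port=${2:-4420} tries=${3:-5}
    local i
    for ((i = 1; i <= tries; i++)); do
        # bash's /dev/tcp redirection performs a plain TCP connect()
        if timeout 1 bash -c "echo > /dev/tcp/$addr/$port" 2>/dev/null; then
            echo "target reachable at $addr:$port"
            return 0
        fi
        echo "attempt $i: connect() to $addr:$port refused, retrying"
        sleep 1
    done
    return 1  # corresponds to the 'Resetting controller failed' lines above
}
wait_for_target 10.0.0.2 4420 5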
00:33:04.908 [2024-12-10 00:14:39.816828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:04.908 [2024-12-10 00:14:39.817168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.908 [2024-12-10 00:14:39.817185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:04.908 [2024-12-10 00:14:39.817193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:04.908 [2024-12-10 00:14:39.817375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:04.908 [2024-12-10 00:14:39.817554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:04.908 [2024-12-10 00:14:39.817562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:04.908 [2024-12-10 00:14:39.817569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:04.908 [2024-12-10 00:14:39.817575] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:04.908 [2024-12-10 00:14:39.830011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:04.908 [2024-12-10 00:14:39.830315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.908 [2024-12-10 00:14:39.830333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:04.908 [2024-12-10 00:14:39.830340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:04.908 [2024-12-10 00:14:39.830518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:04.908 [2024-12-10 00:14:39.830695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:04.908 [2024-12-10 00:14:39.830703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:04.908 [2024-12-10 00:14:39.830710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:04.908 [2024-12-10 00:14:39.830717] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.170 [2024-12-10 00:14:39.843171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.170 [2024-12-10 00:14:39.843526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.170 [2024-12-10 00:14:39.843542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.170 [2024-12-10 00:14:39.843550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.170 [2024-12-10 00:14:39.843728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.170 [2024-12-10 00:14:39.843906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.170 [2024-12-10 00:14:39.843914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.170 [2024-12-10 00:14:39.843921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.170 [2024-12-10 00:14:39.843928] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.170 [2024-12-10 00:14:39.856280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.170 [2024-12-10 00:14:39.856534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.170 [2024-12-10 00:14:39.856551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.170 [2024-12-10 00:14:39.856558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.170 [2024-12-10 00:14:39.856732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.170 [2024-12-10 00:14:39.856906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.170 [2024-12-10 00:14:39.856917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.170 [2024-12-10 00:14:39.856924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.170 [2024-12-10 00:14:39.856930] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.170 [2024-12-10 00:14:39.869337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.170 [2024-12-10 00:14:39.869697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.170 [2024-12-10 00:14:39.869714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.170 [2024-12-10 00:14:39.869721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.170 [2024-12-10 00:14:39.869894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.170 [2024-12-10 00:14:39.870068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.170 [2024-12-10 00:14:39.870076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.170 [2024-12-10 00:14:39.870083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.170 [2024-12-10 00:14:39.870089] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.170 [2024-12-10 00:14:39.882153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.170 [2024-12-10 00:14:39.882539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.170 [2024-12-10 00:14:39.882584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.170 [2024-12-10 00:14:39.882606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.170 [2024-12-10 00:14:39.883204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.170 [2024-12-10 00:14:39.883762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.170 [2024-12-10 00:14:39.883771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.170 [2024-12-10 00:14:39.883777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.170 [2024-12-10 00:14:39.883783] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.170 [2024-12-10 00:14:39.895212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.170 [2024-12-10 00:14:39.895630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.170 [2024-12-10 00:14:39.895647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.170 [2024-12-10 00:14:39.895654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.170 [2024-12-10 00:14:39.895827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.170 [2024-12-10 00:14:39.896000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.170 [2024-12-10 00:14:39.896008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.170 [2024-12-10 00:14:39.896014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.170 [2024-12-10 00:14:39.896020] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.170 [2024-12-10 00:14:39.908133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.170 [2024-12-10 00:14:39.908440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.170 [2024-12-10 00:14:39.908456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.170 [2024-12-10 00:14:39.908463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.170 [2024-12-10 00:14:39.908636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.170 [2024-12-10 00:14:39.908809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.170 [2024-12-10 00:14:39.908817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.170 [2024-12-10 00:14:39.908823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.170 [2024-12-10 00:14:39.908829] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.170 [2024-12-10 00:14:39.921251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.170 [2024-12-10 00:14:39.921597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.170 [2024-12-10 00:14:39.921614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.170 [2024-12-10 00:14:39.921621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.170 [2024-12-10 00:14:39.921795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.170 [2024-12-10 00:14:39.921971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.170 [2024-12-10 00:14:39.921980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.170 [2024-12-10 00:14:39.921987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.171 [2024-12-10 00:14:39.921993] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.171 [2024-12-10 00:14:39.934212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.171 [2024-12-10 00:14:39.934578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.171 [2024-12-10 00:14:39.934595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.171 [2024-12-10 00:14:39.934603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.171 [2024-12-10 00:14:39.934776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.171 [2024-12-10 00:14:39.934949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.171 [2024-12-10 00:14:39.934957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.171 [2024-12-10 00:14:39.934963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.171 [2024-12-10 00:14:39.934969] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.171 [2024-12-10 00:14:39.947213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.171 [2024-12-10 00:14:39.947497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.171 [2024-12-10 00:14:39.947517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.171 [2024-12-10 00:14:39.947524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.171 [2024-12-10 00:14:39.947697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.171 [2024-12-10 00:14:39.947871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.171 [2024-12-10 00:14:39.947879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.171 [2024-12-10 00:14:39.947885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.171 [2024-12-10 00:14:39.947892] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.171 [2024-12-10 00:14:39.960155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.171 [2024-12-10 00:14:39.960458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.171 [2024-12-10 00:14:39.960475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.171 [2024-12-10 00:14:39.960482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.171 [2024-12-10 00:14:39.960655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.171 [2024-12-10 00:14:39.960827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.171 [2024-12-10 00:14:39.960836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.171 [2024-12-10 00:14:39.960842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.171 [2024-12-10 00:14:39.960848] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.171 [2024-12-10 00:14:39.973008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.171 [2024-12-10 00:14:39.973388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.171 [2024-12-10 00:14:39.973405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.171 [2024-12-10 00:14:39.973412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.171 [2024-12-10 00:14:39.973584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.171 [2024-12-10 00:14:39.973756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.171 [2024-12-10 00:14:39.973764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.171 [2024-12-10 00:14:39.973770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.171 [2024-12-10 00:14:39.973776] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.171 [2024-12-10 00:14:39.985906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.171 [2024-12-10 00:14:39.986285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.171 [2024-12-10 00:14:39.986330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.171 [2024-12-10 00:14:39.986353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.171 [2024-12-10 00:14:39.986834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.171 [2024-12-10 00:14:39.986999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.171 [2024-12-10 00:14:39.987007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.171 [2024-12-10 00:14:39.987013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.171 [2024-12-10 00:14:39.987019] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.171 [2024-12-10 00:14:39.998761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.171 [2024-12-10 00:14:39.999198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.171 [2024-12-10 00:14:39.999215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.171 [2024-12-10 00:14:39.999223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.171 [2024-12-10 00:14:39.999396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.171 [2024-12-10 00:14:39.999568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.171 [2024-12-10 00:14:39.999576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.171 [2024-12-10 00:14:39.999583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.171 [2024-12-10 00:14:39.999588] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.171 [2024-12-10 00:14:40.011865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.171 [2024-12-10 00:14:40.012232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.171 [2024-12-10 00:14:40.012250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.171 [2024-12-10 00:14:40.012258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.171 [2024-12-10 00:14:40.012437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.171 [2024-12-10 00:14:40.012614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.171 [2024-12-10 00:14:40.012622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.171 [2024-12-10 00:14:40.012628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.171 [2024-12-10 00:14:40.012634] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.171 [2024-12-10 00:14:40.025667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.171 [2024-12-10 00:14:40.026088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.171 [2024-12-10 00:14:40.026114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.171 [2024-12-10 00:14:40.026127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.171 [2024-12-10 00:14:40.026365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.171 [2024-12-10 00:14:40.026599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.171 [2024-12-10 00:14:40.026613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.171 [2024-12-10 00:14:40.026630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.171 [2024-12-10 00:14:40.026641] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.171 [2024-12-10 00:14:40.039221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.171 [2024-12-10 00:14:40.039562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.171 [2024-12-10 00:14:40.039587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.171 [2024-12-10 00:14:40.039600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.171 [2024-12-10 00:14:40.039809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.171 [2024-12-10 00:14:40.040021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.171 [2024-12-10 00:14:40.040036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.171 [2024-12-10 00:14:40.040046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.171 [2024-12-10 00:14:40.040057] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.171 [2024-12-10 00:14:40.052446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.171 [2024-12-10 00:14:40.052745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.171 [2024-12-10 00:14:40.052763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.171 [2024-12-10 00:14:40.052771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.171 [2024-12-10 00:14:40.052950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.171 [2024-12-10 00:14:40.053129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.171 [2024-12-10 00:14:40.053137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.171 [2024-12-10 00:14:40.053144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.171 [2024-12-10 00:14:40.053151] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.172 [2024-12-10 00:14:40.065426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.172 [2024-12-10 00:14:40.065702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.172 [2024-12-10 00:14:40.065721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.172 [2024-12-10 00:14:40.065729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.172 [2024-12-10 00:14:40.065907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.172 [2024-12-10 00:14:40.066097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.172 [2024-12-10 00:14:40.066105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.172 [2024-12-10 00:14:40.066112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.172 [2024-12-10 00:14:40.066118] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.172 [2024-12-10 00:14:40.078539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.172 [2024-12-10 00:14:40.078899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.172 [2024-12-10 00:14:40.078916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.172 [2024-12-10 00:14:40.078924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.172 [2024-12-10 00:14:40.079102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.172 [2024-12-10 00:14:40.079287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.172 [2024-12-10 00:14:40.079296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.172 [2024-12-10 00:14:40.079303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.172 [2024-12-10 00:14:40.079310] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.172 [2024-12-10 00:14:40.091626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.172 [2024-12-10 00:14:40.091904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.172 [2024-12-10 00:14:40.091921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.172 [2024-12-10 00:14:40.091928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.172 [2024-12-10 00:14:40.092108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.172 [2024-12-10 00:14:40.092293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.172 [2024-12-10 00:14:40.092302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.172 [2024-12-10 00:14:40.092309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.172 [2024-12-10 00:14:40.092315] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.432 [2024-12-10 00:14:40.104767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.432 [2024-12-10 00:14:40.105031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.432 [2024-12-10 00:14:40.105048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.432 [2024-12-10 00:14:40.105055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.432 [2024-12-10 00:14:40.105239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.432 [2024-12-10 00:14:40.105417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.432 [2024-12-10 00:14:40.105425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.432 [2024-12-10 00:14:40.105442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.432 [2024-12-10 00:14:40.105449] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.432 [2024-12-10 00:14:40.117841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.432 [2024-12-10 00:14:40.118180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.432 [2024-12-10 00:14:40.118200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.432 [2024-12-10 00:14:40.118207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.432 [2024-12-10 00:14:40.118379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.432 [2024-12-10 00:14:40.118552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.432 [2024-12-10 00:14:40.118561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.432 [2024-12-10 00:14:40.118567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.432 [2024-12-10 00:14:40.118573] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.432 [2024-12-10 00:14:40.130979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.432 [2024-12-10 00:14:40.131294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.432 [2024-12-10 00:14:40.131311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.432 [2024-12-10 00:14:40.131319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.432 [2024-12-10 00:14:40.131496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.432 [2024-12-10 00:14:40.131674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.432 [2024-12-10 00:14:40.131683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.432 [2024-12-10 00:14:40.131689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.432 [2024-12-10 00:14:40.131695] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.432 [2024-12-10 00:14:40.144133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.432 [2024-12-10 00:14:40.144550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.432 [2024-12-10 00:14:40.144567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.432 [2024-12-10 00:14:40.144574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.432 [2024-12-10 00:14:40.144751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.433 [2024-12-10 00:14:40.144933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.433 [2024-12-10 00:14:40.144942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.433 [2024-12-10 00:14:40.144948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.433 [2024-12-10 00:14:40.144954] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.433 [2024-12-10 00:14:40.157226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.433 [2024-12-10 00:14:40.157584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.433 [2024-12-10 00:14:40.157601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.433 [2024-12-10 00:14:40.157608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.433 [2024-12-10 00:14:40.157786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.433 [2024-12-10 00:14:40.157968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.433 [2024-12-10 00:14:40.157976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.433 [2024-12-10 00:14:40.157983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.433 [2024-12-10 00:14:40.157989] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.433 [2024-12-10 00:14:40.170429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.433 [2024-12-10 00:14:40.170834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.433 [2024-12-10 00:14:40.170851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.433 [2024-12-10 00:14:40.170858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.433 [2024-12-10 00:14:40.171036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.433 [2024-12-10 00:14:40.171221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.433 [2024-12-10 00:14:40.171229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.433 [2024-12-10 00:14:40.171236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.433 [2024-12-10 00:14:40.171242] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.433 [2024-12-10 00:14:40.183524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.433 [2024-12-10 00:14:40.183865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.433 [2024-12-10 00:14:40.183881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.433 [2024-12-10 00:14:40.183889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.433 [2024-12-10 00:14:40.184067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.433 [2024-12-10 00:14:40.184253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.433 [2024-12-10 00:14:40.184262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.433 [2024-12-10 00:14:40.184268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.433 [2024-12-10 00:14:40.184275] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.433 9472.67 IOPS, 37.00 MiB/s [2024-12-09T23:14:40.369Z] [2024-12-10 00:14:40.196523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.433 [2024-12-10 00:14:40.196974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.433 [2024-12-10 00:14:40.196992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.433 [2024-12-10 00:14:40.196999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.433 [2024-12-10 00:14:40.197185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.433 [2024-12-10 00:14:40.197373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.433 [2024-12-10 00:14:40.197381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.433 [2024-12-10 00:14:40.197394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.433 [2024-12-10 00:14:40.197400] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.433 [2024-12-10 00:14:40.209519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.433 [2024-12-10 00:14:40.209871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.433 [2024-12-10 00:14:40.209888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.433 [2024-12-10 00:14:40.209895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.433 [2024-12-10 00:14:40.210068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.433 [2024-12-10 00:14:40.210246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.433 [2024-12-10 00:14:40.210255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.433 [2024-12-10 00:14:40.210262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.433 [2024-12-10 00:14:40.210268] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.433 [2024-12-10 00:14:40.222573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.433 [2024-12-10 00:14:40.222977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.433 [2024-12-10 00:14:40.222994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.433 [2024-12-10 00:14:40.223001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.433 [2024-12-10 00:14:40.223180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.433 [2024-12-10 00:14:40.223354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.433 [2024-12-10 00:14:40.223363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.433 [2024-12-10 00:14:40.223369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.433 [2024-12-10 00:14:40.223375] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.433 [2024-12-10 00:14:40.235708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.433 [2024-12-10 00:14:40.236076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.433 [2024-12-10 00:14:40.236093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.433 [2024-12-10 00:14:40.236100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.433 [2024-12-10 00:14:40.236278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.433 [2024-12-10 00:14:40.236451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.433 [2024-12-10 00:14:40.236459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.433 [2024-12-10 00:14:40.236466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.433 [2024-12-10 00:14:40.236472] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.433 [2024-12-10 00:14:40.248623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.433 [2024-12-10 00:14:40.249072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.433 [2024-12-10 00:14:40.249116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.433 [2024-12-10 00:14:40.249139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.433 [2024-12-10 00:14:40.249675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.433 [2024-12-10 00:14:40.249854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.433 [2024-12-10 00:14:40.249862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.433 [2024-12-10 00:14:40.249868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.433 [2024-12-10 00:14:40.249875] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.433 [2024-12-10 00:14:40.261522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.433 [2024-12-10 00:14:40.261858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.433 [2024-12-10 00:14:40.261903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.433 [2024-12-10 00:14:40.261925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.433 [2024-12-10 00:14:40.262481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.433 [2024-12-10 00:14:40.262661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.433 [2024-12-10 00:14:40.262669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.433 [2024-12-10 00:14:40.262675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.433 [2024-12-10 00:14:40.262682] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.433 [2024-12-10 00:14:40.274610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.433 [2024-12-10 00:14:40.274978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.433 [2024-12-10 00:14:40.275022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.433 [2024-12-10 00:14:40.275044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.433 [2024-12-10 00:14:40.275597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.433 [2024-12-10 00:14:40.275775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.433 [2024-12-10 00:14:40.275783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.434 [2024-12-10 00:14:40.275790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.434 [2024-12-10 00:14:40.275796] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.434 [2024-12-10 00:14:40.287685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.434 [2024-12-10 00:14:40.288027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.434 [2024-12-10 00:14:40.288044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.434 [2024-12-10 00:14:40.288054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.434 [2024-12-10 00:14:40.288239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.434 [2024-12-10 00:14:40.288429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.434 [2024-12-10 00:14:40.288437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.434 [2024-12-10 00:14:40.288444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.434 [2024-12-10 00:14:40.288450] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.434 [2024-12-10 00:14:40.300523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.434 [2024-12-10 00:14:40.300974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.434 [2024-12-10 00:14:40.301017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.434 [2024-12-10 00:14:40.301040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.434 [2024-12-10 00:14:40.301527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.434 [2024-12-10 00:14:40.301701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.434 [2024-12-10 00:14:40.301709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.434 [2024-12-10 00:14:40.301716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.434 [2024-12-10 00:14:40.301721] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.434 [2024-12-10 00:14:40.313724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.434 [2024-12-10 00:14:40.314084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.434 [2024-12-10 00:14:40.314101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.434 [2024-12-10 00:14:40.314108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.434 [2024-12-10 00:14:40.314290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.434 [2024-12-10 00:14:40.314470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.434 [2024-12-10 00:14:40.314478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.434 [2024-12-10 00:14:40.314485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.434 [2024-12-10 00:14:40.314491] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.434 [2024-12-10 00:14:40.326827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.434 [2024-12-10 00:14:40.327129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.434 [2024-12-10 00:14:40.327145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.434 [2024-12-10 00:14:40.327152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.434 [2024-12-10 00:14:40.327337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.434 [2024-12-10 00:14:40.327519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.434 [2024-12-10 00:14:40.327527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.434 [2024-12-10 00:14:40.327533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.434 [2024-12-10 00:14:40.327539] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.434 [2024-12-10 00:14:40.339768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.434 [2024-12-10 00:14:40.340222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.434 [2024-12-10 00:14:40.340239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.434 [2024-12-10 00:14:40.340246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.434 [2024-12-10 00:14:40.340424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.434 [2024-12-10 00:14:40.340604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.434 [2024-12-10 00:14:40.340612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.434 [2024-12-10 00:14:40.340618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.434 [2024-12-10 00:14:40.340624] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.434 [2024-12-10 00:14:40.352833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.434 [2024-12-10 00:14:40.353281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.434 [2024-12-10 00:14:40.353298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.434 [2024-12-10 00:14:40.353306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.434 [2024-12-10 00:14:40.353483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.434 [2024-12-10 00:14:40.353662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.434 [2024-12-10 00:14:40.353670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.434 [2024-12-10 00:14:40.353677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.434 [2024-12-10 00:14:40.353683] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.695 [2024-12-10 00:14:40.366080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.695 [2024-12-10 00:14:40.366432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.695 [2024-12-10 00:14:40.366449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.695 [2024-12-10 00:14:40.366456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.695 [2024-12-10 00:14:40.366634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.695 [2024-12-10 00:14:40.366812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.695 [2024-12-10 00:14:40.366820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.695 [2024-12-10 00:14:40.366830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.695 [2024-12-10 00:14:40.366837] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.695 [2024-12-10 00:14:40.378968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.695 [2024-12-10 00:14:40.379338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.695 [2024-12-10 00:14:40.379355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.695 [2024-12-10 00:14:40.379363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.695 [2024-12-10 00:14:40.379540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.695 [2024-12-10 00:14:40.379719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.695 [2024-12-10 00:14:40.379727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.695 [2024-12-10 00:14:40.379734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.695 [2024-12-10 00:14:40.379740] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.695 [2024-12-10 00:14:40.391887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.695 [2024-12-10 00:14:40.392327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.695 [2024-12-10 00:14:40.392344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.695 [2024-12-10 00:14:40.392351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.695 [2024-12-10 00:14:40.392529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.695 [2024-12-10 00:14:40.392706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.695 [2024-12-10 00:14:40.392714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.695 [2024-12-10 00:14:40.392721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.695 [2024-12-10 00:14:40.392727] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.695 [2024-12-10 00:14:40.404958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.695 [2024-12-10 00:14:40.405389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.695 [2024-12-10 00:14:40.405406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.695 [2024-12-10 00:14:40.405413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.695 [2024-12-10 00:14:40.405587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.695 [2024-12-10 00:14:40.405759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.695 [2024-12-10 00:14:40.405767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.695 [2024-12-10 00:14:40.405773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.695 [2024-12-10 00:14:40.405779] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.695 [2024-12-10 00:14:40.418071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.695 [2024-12-10 00:14:40.418501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.695 [2024-12-10 00:14:40.418518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.695 [2024-12-10 00:14:40.418525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.695 [2024-12-10 00:14:40.418698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.695 [2024-12-10 00:14:40.418870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.695 [2024-12-10 00:14:40.418878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.695 [2024-12-10 00:14:40.418884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.695 [2024-12-10 00:14:40.418890] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.695 [2024-12-10 00:14:40.430963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.695 [2024-12-10 00:14:40.431313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.695 [2024-12-10 00:14:40.431330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.695 [2024-12-10 00:14:40.431337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.695 [2024-12-10 00:14:40.431509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.695 [2024-12-10 00:14:40.431681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.695 [2024-12-10 00:14:40.431691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.695 [2024-12-10 00:14:40.431697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.695 [2024-12-10 00:14:40.431704] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.695 [2024-12-10 00:14:40.443969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.695 [2024-12-10 00:14:40.444404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.695 [2024-12-10 00:14:40.444421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.695 [2024-12-10 00:14:40.444428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.695 [2024-12-10 00:14:40.444601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.695 [2024-12-10 00:14:40.444776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.695 [2024-12-10 00:14:40.444784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.695 [2024-12-10 00:14:40.444790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.695 [2024-12-10 00:14:40.444796] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.695 [2024-12-10 00:14:40.456795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.695 [2024-12-10 00:14:40.457196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.695 [2024-12-10 00:14:40.457213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.695 [2024-12-10 00:14:40.457224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.695 [2024-12-10 00:14:40.457397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.695 [2024-12-10 00:14:40.457569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.695 [2024-12-10 00:14:40.457577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.695 [2024-12-10 00:14:40.457583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.695 [2024-12-10 00:14:40.457589] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.695 [2024-12-10 00:14:40.469864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.695 [2024-12-10 00:14:40.470227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.695 [2024-12-10 00:14:40.470244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.695 [2024-12-10 00:14:40.470251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.695 [2024-12-10 00:14:40.470423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.695 [2024-12-10 00:14:40.470595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.695 [2024-12-10 00:14:40.470603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.695 [2024-12-10 00:14:40.470609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.695 [2024-12-10 00:14:40.470615] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.695 [2024-12-10 00:14:40.482786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.695 [2024-12-10 00:14:40.483147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.695 [2024-12-10 00:14:40.483168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.695 [2024-12-10 00:14:40.483175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.695 [2024-12-10 00:14:40.483347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.696 [2024-12-10 00:14:40.483519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.696 [2024-12-10 00:14:40.483528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.696 [2024-12-10 00:14:40.483534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.696 [2024-12-10 00:14:40.483540] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.696 [2024-12-10 00:14:40.495728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.696 [2024-12-10 00:14:40.496051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.696 [2024-12-10 00:14:40.496067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.696 [2024-12-10 00:14:40.496074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.696 [2024-12-10 00:14:40.496253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.696 [2024-12-10 00:14:40.496430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.696 [2024-12-10 00:14:40.496437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.696 [2024-12-10 00:14:40.496443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.696 [2024-12-10 00:14:40.496449] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.696 [2024-12-10 00:14:40.508750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.696 [2024-12-10 00:14:40.509080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.696 [2024-12-10 00:14:40.509101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.696 [2024-12-10 00:14:40.509108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.696 [2024-12-10 00:14:40.509287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.696 [2024-12-10 00:14:40.509461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.696 [2024-12-10 00:14:40.509469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.696 [2024-12-10 00:14:40.509475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.696 [2024-12-10 00:14:40.509481] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.696 [2024-12-10 00:14:40.521582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.696 [2024-12-10 00:14:40.522039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.696 [2024-12-10 00:14:40.522083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.696 [2024-12-10 00:14:40.522105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.696 [2024-12-10 00:14:40.522538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.696 [2024-12-10 00:14:40.522718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.696 [2024-12-10 00:14:40.522726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.696 [2024-12-10 00:14:40.522732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.696 [2024-12-10 00:14:40.522738] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.696 [2024-12-10 00:14:40.534648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.696 [2024-12-10 00:14:40.534987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.696 [2024-12-10 00:14:40.535004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.696 [2024-12-10 00:14:40.535011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.696 [2024-12-10 00:14:40.535189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.696 [2024-12-10 00:14:40.535362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.696 [2024-12-10 00:14:40.535370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.696 [2024-12-10 00:14:40.535380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.696 [2024-12-10 00:14:40.535386] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.696 [2024-12-10 00:14:40.547664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.696 [2024-12-10 00:14:40.548006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.696 [2024-12-10 00:14:40.548021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.696 [2024-12-10 00:14:40.548028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.696 [2024-12-10 00:14:40.548207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.696 [2024-12-10 00:14:40.548381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.696 [2024-12-10 00:14:40.548389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.696 [2024-12-10 00:14:40.548395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.696 [2024-12-10 00:14:40.548401] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.696 [2024-12-10 00:14:40.560694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.696 [2024-12-10 00:14:40.561057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.696 [2024-12-10 00:14:40.561073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.696 [2024-12-10 00:14:40.561080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.696 [2024-12-10 00:14:40.561258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.696 [2024-12-10 00:14:40.561432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.696 [2024-12-10 00:14:40.561439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.696 [2024-12-10 00:14:40.561446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.696 [2024-12-10 00:14:40.561452] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.696 [2024-12-10 00:14:40.573581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.696 [2024-12-10 00:14:40.574016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.696 [2024-12-10 00:14:40.574033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.696 [2024-12-10 00:14:40.574040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.696 [2024-12-10 00:14:40.574219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.696 [2024-12-10 00:14:40.574393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.696 [2024-12-10 00:14:40.574400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.696 [2024-12-10 00:14:40.574407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.696 [2024-12-10 00:14:40.574412] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.696 [2024-12-10 00:14:40.586502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.696 [2024-12-10 00:14:40.586826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.696 [2024-12-10 00:14:40.586842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.696 [2024-12-10 00:14:40.586849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.696 [2024-12-10 00:14:40.587022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.696 [2024-12-10 00:14:40.587200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.696 [2024-12-10 00:14:40.587209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.696 [2024-12-10 00:14:40.587216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.696 [2024-12-10 00:14:40.587222] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.696 [2024-12-10 00:14:40.599497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.696 [2024-12-10 00:14:40.599829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.696 [2024-12-10 00:14:40.599846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.697 [2024-12-10 00:14:40.599853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.697 [2024-12-10 00:14:40.600025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.697 [2024-12-10 00:14:40.600204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.697 [2024-12-10 00:14:40.600213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.697 [2024-12-10 00:14:40.600219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.697 [2024-12-10 00:14:40.600225] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.697 [2024-12-10 00:14:40.612447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.697 [2024-12-10 00:14:40.612767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.697 [2024-12-10 00:14:40.612784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.697 [2024-12-10 00:14:40.612791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.697 [2024-12-10 00:14:40.612963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.697 [2024-12-10 00:14:40.613136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.697 [2024-12-10 00:14:40.613144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.697 [2024-12-10 00:14:40.613150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.697 [2024-12-10 00:14:40.613162] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.697 [2024-12-10 00:14:40.625539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.697 [2024-12-10 00:14:40.625947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.697 [2024-12-10 00:14:40.625964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.697 [2024-12-10 00:14:40.625975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.697 [2024-12-10 00:14:40.626154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.697 [2024-12-10 00:14:40.626339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.697 [2024-12-10 00:14:40.626347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.697 [2024-12-10 00:14:40.626353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.697 [2024-12-10 00:14:40.626360] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.957 [2024-12-10 00:14:40.638464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.957 [2024-12-10 00:14:40.638879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.957 [2024-12-10 00:14:40.638895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.957 [2024-12-10 00:14:40.638902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.957 [2024-12-10 00:14:40.639075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.957 [2024-12-10 00:14:40.639255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.957 [2024-12-10 00:14:40.639264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.957 [2024-12-10 00:14:40.639270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.957 [2024-12-10 00:14:40.639276] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.957 [2024-12-10 00:14:40.651561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.957 [2024-12-10 00:14:40.651976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.957 [2024-12-10 00:14:40.651992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.957 [2024-12-10 00:14:40.651999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.957 [2024-12-10 00:14:40.652181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.957 [2024-12-10 00:14:40.652358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.957 [2024-12-10 00:14:40.652367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.957 [2024-12-10 00:14:40.652373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.957 [2024-12-10 00:14:40.652379] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.957 [2024-12-10 00:14:40.664591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.957 [2024-12-10 00:14:40.664997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.957 [2024-12-10 00:14:40.665013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.957 [2024-12-10 00:14:40.665020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.957 [2024-12-10 00:14:40.665200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.957 [2024-12-10 00:14:40.665376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.957 [2024-12-10 00:14:40.665384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.958 [2024-12-10 00:14:40.665390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.958 [2024-12-10 00:14:40.665396] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.958 [2024-12-10 00:14:40.677462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.958 [2024-12-10 00:14:40.677871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.958 [2024-12-10 00:14:40.677887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.958 [2024-12-10 00:14:40.677908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.958 [2024-12-10 00:14:40.678468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.958 [2024-12-10 00:14:40.678658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.958 [2024-12-10 00:14:40.678666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.958 [2024-12-10 00:14:40.678672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.958 [2024-12-10 00:14:40.678678] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.958 [2024-12-10 00:14:40.690527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.958 [2024-12-10 00:14:40.690952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.958 [2024-12-10 00:14:40.690968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.958 [2024-12-10 00:14:40.690975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.958 [2024-12-10 00:14:40.691147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.958 [2024-12-10 00:14:40.691327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.958 [2024-12-10 00:14:40.691336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.958 [2024-12-10 00:14:40.691342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.958 [2024-12-10 00:14:40.691348] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.958 [2024-12-10 00:14:40.703568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.958 [2024-12-10 00:14:40.703991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.958 [2024-12-10 00:14:40.704029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.958 [2024-12-10 00:14:40.704054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.958 [2024-12-10 00:14:40.704584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.958 [2024-12-10 00:14:40.704758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.958 [2024-12-10 00:14:40.704766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.958 [2024-12-10 00:14:40.704776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.958 [2024-12-10 00:14:40.704782] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.958 [2024-12-10 00:14:40.716706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.958 [2024-12-10 00:14:40.717073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.958 [2024-12-10 00:14:40.717117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.958 [2024-12-10 00:14:40.717139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.958 [2024-12-10 00:14:40.717618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.958 [2024-12-10 00:14:40.717792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.958 [2024-12-10 00:14:40.717800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.958 [2024-12-10 00:14:40.717806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.958 [2024-12-10 00:14:40.717812] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.958 [2024-12-10 00:14:40.729612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.958 [2024-12-10 00:14:40.730048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.958 [2024-12-10 00:14:40.730064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.958 [2024-12-10 00:14:40.730072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.958 [2024-12-10 00:14:40.730252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.958 [2024-12-10 00:14:40.730426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.958 [2024-12-10 00:14:40.730435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.958 [2024-12-10 00:14:40.730441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.958 [2024-12-10 00:14:40.730447] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.958 [2024-12-10 00:14:40.742681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.958 [2024-12-10 00:14:40.743087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.958 [2024-12-10 00:14:40.743104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.958 [2024-12-10 00:14:40.743111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.958 [2024-12-10 00:14:40.743290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.958 [2024-12-10 00:14:40.743467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.958 [2024-12-10 00:14:40.743476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.958 [2024-12-10 00:14:40.743482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.958 [2024-12-10 00:14:40.743488] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.958 [2024-12-10 00:14:40.755750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.958 [2024-12-10 00:14:40.756187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.958 [2024-12-10 00:14:40.756203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.958 [2024-12-10 00:14:40.756210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.958 [2024-12-10 00:14:40.756384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.958 [2024-12-10 00:14:40.756558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.958 [2024-12-10 00:14:40.756567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.958 [2024-12-10 00:14:40.756573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.958 [2024-12-10 00:14:40.756579] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.958 [2024-12-10 00:14:40.768746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.958 [2024-12-10 00:14:40.769167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.958 [2024-12-10 00:14:40.769182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.958 [2024-12-10 00:14:40.769190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.958 [2024-12-10 00:14:40.769363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.958 [2024-12-10 00:14:40.769537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.958 [2024-12-10 00:14:40.769544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.958 [2024-12-10 00:14:40.769551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.958 [2024-12-10 00:14:40.769557] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.958 [2024-12-10 00:14:40.781696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.958 [2024-12-10 00:14:40.782140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.958 [2024-12-10 00:14:40.782190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.958 [2024-12-10 00:14:40.782217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.958 [2024-12-10 00:14:40.782741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.958 [2024-12-10 00:14:40.782916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.958 [2024-12-10 00:14:40.782924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.958 [2024-12-10 00:14:40.782930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.958 [2024-12-10 00:14:40.782936] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.958 [2024-12-10 00:14:40.794833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.958 [2024-12-10 00:14:40.795272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.958 [2024-12-10 00:14:40.795290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.958 [2024-12-10 00:14:40.795300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.958 [2024-12-10 00:14:40.795473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.958 [2024-12-10 00:14:40.795647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.958 [2024-12-10 00:14:40.795655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.958 [2024-12-10 00:14:40.795661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.958 [2024-12-10 00:14:40.795666] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.959 [2024-12-10 00:14:40.807656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.959 [2024-12-10 00:14:40.808098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.959 [2024-12-10 00:14:40.808142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.959 [2024-12-10 00:14:40.808178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.959 [2024-12-10 00:14:40.808763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.959 [2024-12-10 00:14:40.809216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.959 [2024-12-10 00:14:40.809224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.959 [2024-12-10 00:14:40.809231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.959 [2024-12-10 00:14:40.809237] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.959 [2024-12-10 00:14:40.820568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.959 [2024-12-10 00:14:40.820961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.959 [2024-12-10 00:14:40.820977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.959 [2024-12-10 00:14:40.820984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.959 [2024-12-10 00:14:40.821148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.959 [2024-12-10 00:14:40.821317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.959 [2024-12-10 00:14:40.821326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.959 [2024-12-10 00:14:40.821332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.959 [2024-12-10 00:14:40.821337] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.959 [2024-12-10 00:14:40.833495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.959 [2024-12-10 00:14:40.833931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.959 [2024-12-10 00:14:40.833947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.959 [2024-12-10 00:14:40.833954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.959 [2024-12-10 00:14:40.834118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.959 [2024-12-10 00:14:40.834289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.959 [2024-12-10 00:14:40.834304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.959 [2024-12-10 00:14:40.834310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.959 [2024-12-10 00:14:40.834316] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.959 [2024-12-10 00:14:40.846339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.959 [2024-12-10 00:14:40.846614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.959 [2024-12-10 00:14:40.846630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.959 [2024-12-10 00:14:40.846638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.959 [2024-12-10 00:14:40.846801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.959 [2024-12-10 00:14:40.846965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.959 [2024-12-10 00:14:40.846973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.959 [2024-12-10 00:14:40.846978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.959 [2024-12-10 00:14:40.846984] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.959 [2024-12-10 00:14:40.859342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.959 [2024-12-10 00:14:40.859738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.959 [2024-12-10 00:14:40.859783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.959 [2024-12-10 00:14:40.859805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.959 [2024-12-10 00:14:40.860302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.959 [2024-12-10 00:14:40.860466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.959 [2024-12-10 00:14:40.860474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.959 [2024-12-10 00:14:40.860480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.959 [2024-12-10 00:14:40.860486] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:05.959 [2024-12-10 00:14:40.872277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.959 [2024-12-10 00:14:40.872699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.959 [2024-12-10 00:14:40.872743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.959 [2024-12-10 00:14:40.872766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.959 [2024-12-10 00:14:40.873275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.959 [2024-12-10 00:14:40.873440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.959 [2024-12-10 00:14:40.873448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.959 [2024-12-10 00:14:40.873454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.959 [2024-12-10 00:14:40.873463] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:05.959 [2024-12-10 00:14:40.885217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:05.959 [2024-12-10 00:14:40.885602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:05.959 [2024-12-10 00:14:40.885619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:05.959 [2024-12-10 00:14:40.885626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:05.959 [2024-12-10 00:14:40.885804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:05.959 [2024-12-10 00:14:40.885982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:05.959 [2024-12-10 00:14:40.885990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:05.959 [2024-12-10 00:14:40.885996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:05.959 [2024-12-10 00:14:40.886002] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.219 [2024-12-10 00:14:40.898316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.219 [2024-12-10 00:14:40.898722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.219 [2024-12-10 00:14:40.898766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.219 [2024-12-10 00:14:40.898788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.219 [2024-12-10 00:14:40.899270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.220 [2024-12-10 00:14:40.899435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.220 [2024-12-10 00:14:40.899443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.220 [2024-12-10 00:14:40.899449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.220 [2024-12-10 00:14:40.899454] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.220 [2024-12-10 00:14:40.911314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.220 [2024-12-10 00:14:40.911698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.220 [2024-12-10 00:14:40.911714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.220 [2024-12-10 00:14:40.911720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.220 [2024-12-10 00:14:40.911884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.220 [2024-12-10 00:14:40.912049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.220 [2024-12-10 00:14:40.912057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.220 [2024-12-10 00:14:40.912063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.220 [2024-12-10 00:14:40.912069] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.220 [2024-12-10 00:14:40.924124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.220 [2024-12-10 00:14:40.924532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.220 [2024-12-10 00:14:40.924549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.220 [2024-12-10 00:14:40.924556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.220 [2024-12-10 00:14:40.924720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.220 [2024-12-10 00:14:40.924884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.220 [2024-12-10 00:14:40.924891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.220 [2024-12-10 00:14:40.924898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.220 [2024-12-10 00:14:40.924903] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.220 [2024-12-10 00:14:40.937055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.220 [2024-12-10 00:14:40.937371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.220 [2024-12-10 00:14:40.937388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.220 [2024-12-10 00:14:40.937395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.220 [2024-12-10 00:14:40.937558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.220 [2024-12-10 00:14:40.937721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.220 [2024-12-10 00:14:40.937728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.220 [2024-12-10 00:14:40.937734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.220 [2024-12-10 00:14:40.937740] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.220 [2024-12-10 00:14:40.949990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.220 [2024-12-10 00:14:40.950379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.220 [2024-12-10 00:14:40.950396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.220 [2024-12-10 00:14:40.950402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.220 [2024-12-10 00:14:40.950566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.220 [2024-12-10 00:14:40.950730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.220 [2024-12-10 00:14:40.950738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.220 [2024-12-10 00:14:40.950743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.220 [2024-12-10 00:14:40.950749] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.220 [2024-12-10 00:14:40.962836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.220 [2024-12-10 00:14:40.963250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.220 [2024-12-10 00:14:40.963267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.220 [2024-12-10 00:14:40.963274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.220 [2024-12-10 00:14:40.963451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.220 [2024-12-10 00:14:40.963624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.220 [2024-12-10 00:14:40.963632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.220 [2024-12-10 00:14:40.963638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.220 [2024-12-10 00:14:40.963644] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.220 [2024-12-10 00:14:40.975694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.220 [2024-12-10 00:14:40.976014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.220 [2024-12-10 00:14:40.976030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.220 [2024-12-10 00:14:40.976037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.220 [2024-12-10 00:14:40.976206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.220 [2024-12-10 00:14:40.976370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.220 [2024-12-10 00:14:40.976378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.220 [2024-12-10 00:14:40.976384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.220 [2024-12-10 00:14:40.976390] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.220 [2024-12-10 00:14:40.988559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.220 [2024-12-10 00:14:40.988957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.220 [2024-12-10 00:14:40.988973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.220 [2024-12-10 00:14:40.988979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.220 [2024-12-10 00:14:40.989142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.220 [2024-12-10 00:14:40.989314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.220 [2024-12-10 00:14:40.989322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.220 [2024-12-10 00:14:40.989328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.221 [2024-12-10 00:14:40.989333] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.221 [2024-12-10 00:14:41.001487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.221 [2024-12-10 00:14:41.001879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.221 [2024-12-10 00:14:41.001895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.221 [2024-12-10 00:14:41.001902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.221 [2024-12-10 00:14:41.002065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.221 [2024-12-10 00:14:41.002235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.221 [2024-12-10 00:14:41.002246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.221 [2024-12-10 00:14:41.002252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.221 [2024-12-10 00:14:41.002258] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.221 [2024-12-10 00:14:41.014304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.221 [2024-12-10 00:14:41.014696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.221 [2024-12-10 00:14:41.014712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.221 [2024-12-10 00:14:41.014718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.221 [2024-12-10 00:14:41.014883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.221 [2024-12-10 00:14:41.015047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.221 [2024-12-10 00:14:41.015055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.221 [2024-12-10 00:14:41.015061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.221 [2024-12-10 00:14:41.015067] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.221 [2024-12-10 00:14:41.027224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.221 [2024-12-10 00:14:41.027612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.221 [2024-12-10 00:14:41.027628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.221 [2024-12-10 00:14:41.027635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.221 [2024-12-10 00:14:41.027798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.221 [2024-12-10 00:14:41.027961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.221 [2024-12-10 00:14:41.027969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.221 [2024-12-10 00:14:41.027975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.221 [2024-12-10 00:14:41.027981] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.221 [2024-12-10 00:14:41.040076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.221 [2024-12-10 00:14:41.040502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.221 [2024-12-10 00:14:41.040520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.221 [2024-12-10 00:14:41.040527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.221 [2024-12-10 00:14:41.040700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.221 [2024-12-10 00:14:41.040874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.221 [2024-12-10 00:14:41.040882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.221 [2024-12-10 00:14:41.040889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.221 [2024-12-10 00:14:41.040899] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.221 [2024-12-10 00:14:41.053108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.221 [2024-12-10 00:14:41.053533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.221 [2024-12-10 00:14:41.053569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.221 [2024-12-10 00:14:41.053594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.221 [2024-12-10 00:14:41.054188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.221 [2024-12-10 00:14:41.054775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.221 [2024-12-10 00:14:41.054797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.221 [2024-12-10 00:14:41.054804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.221 [2024-12-10 00:14:41.054811] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.221 [2024-12-10 00:14:41.066219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.221 [2024-12-10 00:14:41.066630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.221 [2024-12-10 00:14:41.066647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.221 [2024-12-10 00:14:41.066654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.221 [2024-12-10 00:14:41.066827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.221 [2024-12-10 00:14:41.067000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.221 [2024-12-10 00:14:41.067008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.221 [2024-12-10 00:14:41.067014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.221 [2024-12-10 00:14:41.067020] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.221 [2024-12-10 00:14:41.079116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.221 [2024-12-10 00:14:41.079486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.221 [2024-12-10 00:14:41.079502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.221 [2024-12-10 00:14:41.079509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.221 [2024-12-10 00:14:41.079673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.221 [2024-12-10 00:14:41.079836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.221 [2024-12-10 00:14:41.079843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.221 [2024-12-10 00:14:41.079849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.221 [2024-12-10 00:14:41.079855] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.221 [2024-12-10 00:14:41.092023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.221 [2024-12-10 00:14:41.092339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.222 [2024-12-10 00:14:41.092355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.222 [2024-12-10 00:14:41.092362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.222 [2024-12-10 00:14:41.092524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.222 [2024-12-10 00:14:41.092687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.222 [2024-12-10 00:14:41.092695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.222 [2024-12-10 00:14:41.092701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.222 [2024-12-10 00:14:41.092706] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.222 [2024-12-10 00:14:41.104959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.222 [2024-12-10 00:14:41.105359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.222 [2024-12-10 00:14:41.105403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.222 [2024-12-10 00:14:41.105426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.222 [2024-12-10 00:14:41.105928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.222 [2024-12-10 00:14:41.106092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.222 [2024-12-10 00:14:41.106100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.222 [2024-12-10 00:14:41.106106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.222 [2024-12-10 00:14:41.106111] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.222 [2024-12-10 00:14:41.117782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.222 [2024-12-10 00:14:41.118172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.222 [2024-12-10 00:14:41.118189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.222 [2024-12-10 00:14:41.118195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.222 [2024-12-10 00:14:41.118360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.222 [2024-12-10 00:14:41.118525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.222 [2024-12-10 00:14:41.118533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.222 [2024-12-10 00:14:41.118538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.222 [2024-12-10 00:14:41.118545] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.222 [2024-12-10 00:14:41.130694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.222 [2024-12-10 00:14:41.131088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.222 [2024-12-10 00:14:41.131104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.222 [2024-12-10 00:14:41.131111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.222 [2024-12-10 00:14:41.131284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.222 [2024-12-10 00:14:41.131449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.222 [2024-12-10 00:14:41.131457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.222 [2024-12-10 00:14:41.131463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.222 [2024-12-10 00:14:41.131468] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.222 [2024-12-10 00:14:41.143623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.222 [2024-12-10 00:14:41.144015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.222 [2024-12-10 00:14:41.144031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.222 [2024-12-10 00:14:41.144038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.222 [2024-12-10 00:14:41.144208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.222 [2024-12-10 00:14:41.144372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.222 [2024-12-10 00:14:41.144379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.222 [2024-12-10 00:14:41.144385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.222 [2024-12-10 00:14:41.144391] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.482 [2024-12-10 00:14:41.156647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.482 [2024-12-10 00:14:41.157052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.482 [2024-12-10 00:14:41.157069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.482 [2024-12-10 00:14:41.157076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.482 [2024-12-10 00:14:41.157256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.482 [2024-12-10 00:14:41.157440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.482 [2024-12-10 00:14:41.157447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.482 [2024-12-10 00:14:41.157453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.482 [2024-12-10 00:14:41.157459] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.482 [2024-12-10 00:14:41.169839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.482 [2024-12-10 00:14:41.170251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.482 [2024-12-10 00:14:41.170268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.482 [2024-12-10 00:14:41.170275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.482 [2024-12-10 00:14:41.170453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.482 [2024-12-10 00:14:41.170632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.482 [2024-12-10 00:14:41.170643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.482 [2024-12-10 00:14:41.170650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.482 [2024-12-10 00:14:41.170656] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.483 [2024-12-10 00:14:41.182929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.483 [2024-12-10 00:14:41.183334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.483 [2024-12-10 00:14:41.183351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.483 [2024-12-10 00:14:41.183359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.483 [2024-12-10 00:14:41.183537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.483 [2024-12-10 00:14:41.183715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.483 [2024-12-10 00:14:41.183723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.483 [2024-12-10 00:14:41.183729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.483 [2024-12-10 00:14:41.183736] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.483 7104.50 IOPS, 27.75 MiB/s [2024-12-09T23:14:41.419Z] [2024-12-10 00:14:41.196135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.483 [2024-12-10 00:14:41.196494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.483 [2024-12-10 00:14:41.196511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.483 [2024-12-10 00:14:41.196518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.483 [2024-12-10 00:14:41.196696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.483 [2024-12-10 00:14:41.196874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.483 [2024-12-10 00:14:41.196882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.483 [2024-12-10 00:14:41.196889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.483 [2024-12-10 00:14:41.196895] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.483 [2024-12-10 00:14:41.209322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.483 [2024-12-10 00:14:41.209659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.483 [2024-12-10 00:14:41.209676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.483 [2024-12-10 00:14:41.209684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.483 [2024-12-10 00:14:41.209861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.483 [2024-12-10 00:14:41.210041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.483 [2024-12-10 00:14:41.210050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.483 [2024-12-10 00:14:41.210056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.483 [2024-12-10 00:14:41.210066] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.483 [2024-12-10 00:14:41.222503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.483 [2024-12-10 00:14:41.222912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.483 [2024-12-10 00:14:41.222929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.483 [2024-12-10 00:14:41.222937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.483 [2024-12-10 00:14:41.223115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.483 [2024-12-10 00:14:41.223301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.483 [2024-12-10 00:14:41.223310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.483 [2024-12-10 00:14:41.223316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.483 [2024-12-10 00:14:41.223323] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.483 [2024-12-10 00:14:41.235614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.483 [2024-12-10 00:14:41.236039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.483 [2024-12-10 00:14:41.236055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.483 [2024-12-10 00:14:41.236062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.483 [2024-12-10 00:14:41.236246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.483 [2024-12-10 00:14:41.236424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.483 [2024-12-10 00:14:41.236433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.483 [2024-12-10 00:14:41.236439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.483 [2024-12-10 00:14:41.236445] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.483 [2024-12-10 00:14:41.248744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.483 [2024-12-10 00:14:41.249182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.483 [2024-12-10 00:14:41.249199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.483 [2024-12-10 00:14:41.249207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.483 [2024-12-10 00:14:41.249385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.483 [2024-12-10 00:14:41.249564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.483 [2024-12-10 00:14:41.249573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.483 [2024-12-10 00:14:41.249579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.483 [2024-12-10 00:14:41.249585] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.483 [2024-12-10 00:14:41.261855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.483 [2024-12-10 00:14:41.262295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.483 [2024-12-10 00:14:41.262347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.483 [2024-12-10 00:14:41.262370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.483 [2024-12-10 00:14:41.262945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.483 [2024-12-10 00:14:41.263119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.483 [2024-12-10 00:14:41.263127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.483 [2024-12-10 00:14:41.263133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.483 [2024-12-10 00:14:41.263139] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.483 [2024-12-10 00:14:41.274713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.483 [2024-12-10 00:14:41.275123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.483 [2024-12-10 00:14:41.275179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.483 [2024-12-10 00:14:41.275204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.483 [2024-12-10 00:14:41.275787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.483 [2024-12-10 00:14:41.276337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.483 [2024-12-10 00:14:41.276345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.483 [2024-12-10 00:14:41.276351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.483 [2024-12-10 00:14:41.276357] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.483 [2024-12-10 00:14:41.287693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.483 [2024-12-10 00:14:41.288115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.483 [2024-12-10 00:14:41.288132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.483 [2024-12-10 00:14:41.288139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.483 [2024-12-10 00:14:41.288318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.483 [2024-12-10 00:14:41.288491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.483 [2024-12-10 00:14:41.288499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.483 [2024-12-10 00:14:41.288506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.483 [2024-12-10 00:14:41.288512] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.483 [2024-12-10 00:14:41.300562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.483 [2024-12-10 00:14:41.300997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.483 [2024-12-10 00:14:41.301039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.483 [2024-12-10 00:14:41.301062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.483 [2024-12-10 00:14:41.301552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.483 [2024-12-10 00:14:41.301727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.483 [2024-12-10 00:14:41.301735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.483 [2024-12-10 00:14:41.301741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.483 [2024-12-10 00:14:41.301747] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.484 [2024-12-10 00:14:41.313643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.484 [2024-12-10 00:14:41.314012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.484 [2024-12-10 00:14:41.314055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.484 [2024-12-10 00:14:41.314077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.484 [2024-12-10 00:14:41.314675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.484 [2024-12-10 00:14:41.315270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.484 [2024-12-10 00:14:41.315279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.484 [2024-12-10 00:14:41.315285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.484 [2024-12-10 00:14:41.315291] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.484 [2024-12-10 00:14:41.326711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.484 [2024-12-10 00:14:41.327088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.484 [2024-12-10 00:14:41.327104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.484 [2024-12-10 00:14:41.327112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.484 [2024-12-10 00:14:41.327289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.484 [2024-12-10 00:14:41.327464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.484 [2024-12-10 00:14:41.327472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.484 [2024-12-10 00:14:41.327478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.484 [2024-12-10 00:14:41.327484] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.484 [2024-12-10 00:14:41.339567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.484 [2024-12-10 00:14:41.339974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.484 [2024-12-10 00:14:41.339991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.484 [2024-12-10 00:14:41.339997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.484 [2024-12-10 00:14:41.340165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.484 [2024-12-10 00:14:41.340329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.484 [2024-12-10 00:14:41.340340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.484 [2024-12-10 00:14:41.340346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.484 [2024-12-10 00:14:41.340351] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.484 [2024-12-10 00:14:41.352519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.484 [2024-12-10 00:14:41.352962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.484 [2024-12-10 00:14:41.352977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.484 [2024-12-10 00:14:41.352984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.484 [2024-12-10 00:14:41.353148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.484 [2024-12-10 00:14:41.353315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.484 [2024-12-10 00:14:41.353324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.484 [2024-12-10 00:14:41.353330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.484 [2024-12-10 00:14:41.353336] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.484 [2024-12-10 00:14:41.365470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.484 [2024-12-10 00:14:41.365922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.484 [2024-12-10 00:14:41.365965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.484 [2024-12-10 00:14:41.365987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.484 [2024-12-10 00:14:41.366483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.484 [2024-12-10 00:14:41.366657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.484 [2024-12-10 00:14:41.366665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.484 [2024-12-10 00:14:41.366672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.484 [2024-12-10 00:14:41.366678] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.484 [2024-12-10 00:14:41.378350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.484 [2024-12-10 00:14:41.378629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.484 [2024-12-10 00:14:41.378645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.484 [2024-12-10 00:14:41.378652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.484 [2024-12-10 00:14:41.378816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.484 [2024-12-10 00:14:41.378980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.484 [2024-12-10 00:14:41.378987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.484 [2024-12-10 00:14:41.378993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.484 [2024-12-10 00:14:41.378999] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.484 [2024-12-10 00:14:41.391198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.484 [2024-12-10 00:14:41.391474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.484 [2024-12-10 00:14:41.391490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.484 [2024-12-10 00:14:41.391496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.484 [2024-12-10 00:14:41.391659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.484 [2024-12-10 00:14:41.391824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.484 [2024-12-10 00:14:41.391831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.484 [2024-12-10 00:14:41.391837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.484 [2024-12-10 00:14:41.391843] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.484 [2024-12-10 00:14:41.404115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.484 [2024-12-10 00:14:41.404459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.484 [2024-12-10 00:14:41.404475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.484 [2024-12-10 00:14:41.404482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.484 [2024-12-10 00:14:41.404645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.484 [2024-12-10 00:14:41.404810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.484 [2024-12-10 00:14:41.404818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.484 [2024-12-10 00:14:41.404824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.484 [2024-12-10 00:14:41.404829] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.745 [2024-12-10 00:14:41.417265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.745 [2024-12-10 00:14:41.417686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-12-10 00:14:41.417702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.745 [2024-12-10 00:14:41.417709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.745 [2024-12-10 00:14:41.417872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.745 [2024-12-10 00:14:41.418036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.745 [2024-12-10 00:14:41.418044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.745 [2024-12-10 00:14:41.418049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.745 [2024-12-10 00:14:41.418055] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.745 [2024-12-10 00:14:41.430187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.745 [2024-12-10 00:14:41.430582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-12-10 00:14:41.430601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.745 [2024-12-10 00:14:41.430607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.745 [2024-12-10 00:14:41.430770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.745 [2024-12-10 00:14:41.430934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.745 [2024-12-10 00:14:41.430942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.745 [2024-12-10 00:14:41.430948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.745 [2024-12-10 00:14:41.430954] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.745 [2024-12-10 00:14:41.443060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.745 [2024-12-10 00:14:41.443391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.745 [2024-12-10 00:14:41.443408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.745 [2024-12-10 00:14:41.443414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.745 [2024-12-10 00:14:41.443577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.745 [2024-12-10 00:14:41.443741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.746 [2024-12-10 00:14:41.443749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.746 [2024-12-10 00:14:41.443755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.746 [2024-12-10 00:14:41.443761] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.746 [2024-12-10 00:14:41.455921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.746 [2024-12-10 00:14:41.456317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-12-10 00:14:41.456367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.746 [2024-12-10 00:14:41.456390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.746 [2024-12-10 00:14:41.456856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.746 [2024-12-10 00:14:41.457021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.746 [2024-12-10 00:14:41.457029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.746 [2024-12-10 00:14:41.457035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.746 [2024-12-10 00:14:41.457040] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.746 [2024-12-10 00:14:41.468840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.746 [2024-12-10 00:14:41.469313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-12-10 00:14:41.469357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.746 [2024-12-10 00:14:41.469380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.746 [2024-12-10 00:14:41.469741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.746 [2024-12-10 00:14:41.469905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.746 [2024-12-10 00:14:41.469913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.746 [2024-12-10 00:14:41.469919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.746 [2024-12-10 00:14:41.469925] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.746 [2024-12-10 00:14:41.481729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.746 [2024-12-10 00:14:41.482076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-12-10 00:14:41.482092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.746 [2024-12-10 00:14:41.482098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.746 [2024-12-10 00:14:41.482266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.746 [2024-12-10 00:14:41.482436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.746 [2024-12-10 00:14:41.482444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.746 [2024-12-10 00:14:41.482450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.746 [2024-12-10 00:14:41.482455] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.746 [2024-12-10 00:14:41.494631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.746 [2024-12-10 00:14:41.495037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-12-10 00:14:41.495055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.746 [2024-12-10 00:14:41.495062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.746 [2024-12-10 00:14:41.495231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.746 [2024-12-10 00:14:41.495395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.746 [2024-12-10 00:14:41.495403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.746 [2024-12-10 00:14:41.495409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.746 [2024-12-10 00:14:41.495415] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.746 [2024-12-10 00:14:41.507533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.746 [2024-12-10 00:14:41.507999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-12-10 00:14:41.508043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.746 [2024-12-10 00:14:41.508066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.746 [2024-12-10 00:14:41.508524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.746 [2024-12-10 00:14:41.508698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.746 [2024-12-10 00:14:41.508707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.746 [2024-12-10 00:14:41.508717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.746 [2024-12-10 00:14:41.508723] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.746 [2024-12-10 00:14:41.520405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.746 [2024-12-10 00:14:41.520722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-12-10 00:14:41.520739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.746 [2024-12-10 00:14:41.520745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.746 [2024-12-10 00:14:41.520909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.746 [2024-12-10 00:14:41.521072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.746 [2024-12-10 00:14:41.521079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.746 [2024-12-10 00:14:41.521085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.746 [2024-12-10 00:14:41.521091] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.746 [2024-12-10 00:14:41.533267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.746 [2024-12-10 00:14:41.533592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-12-10 00:14:41.533608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.746 [2024-12-10 00:14:41.533614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.746 [2024-12-10 00:14:41.533779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.746 [2024-12-10 00:14:41.533943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.746 [2024-12-10 00:14:41.533951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.746 [2024-12-10 00:14:41.533957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.746 [2024-12-10 00:14:41.533963] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.746 [2024-12-10 00:14:41.546075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.746 [2024-12-10 00:14:41.546390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-12-10 00:14:41.546407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.746 [2024-12-10 00:14:41.546414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.746 [2024-12-10 00:14:41.546577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.746 [2024-12-10 00:14:41.546741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.746 [2024-12-10 00:14:41.546749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.746 [2024-12-10 00:14:41.546755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.746 [2024-12-10 00:14:41.546761] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.746 [2024-12-10 00:14:41.558941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.746 [2024-12-10 00:14:41.559341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-12-10 00:14:41.559358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.746 [2024-12-10 00:14:41.559365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.746 [2024-12-10 00:14:41.559538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.746 [2024-12-10 00:14:41.559712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.746 [2024-12-10 00:14:41.559721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.746 [2024-12-10 00:14:41.559729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.746 [2024-12-10 00:14:41.559735] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.746 [2024-12-10 00:14:41.571930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.746 [2024-12-10 00:14:41.572280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.746 [2024-12-10 00:14:41.572297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.746 [2024-12-10 00:14:41.572304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.746 [2024-12-10 00:14:41.572477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.746 [2024-12-10 00:14:41.572651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.746 [2024-12-10 00:14:41.572659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.747 [2024-12-10 00:14:41.572665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.747 [2024-12-10 00:14:41.572671] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.747 [2024-12-10 00:14:41.584920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.747 [2024-12-10 00:14:41.585265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-12-10 00:14:41.585282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.747 [2024-12-10 00:14:41.585290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.747 [2024-12-10 00:14:41.585463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.747 [2024-12-10 00:14:41.585636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.747 [2024-12-10 00:14:41.585644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.747 [2024-12-10 00:14:41.585650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.747 [2024-12-10 00:14:41.585657] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.747 [2024-12-10 00:14:41.597911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.747 [2024-12-10 00:14:41.598319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-12-10 00:14:41.598372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.747 [2024-12-10 00:14:41.598395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.747 [2024-12-10 00:14:41.598938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.747 [2024-12-10 00:14:41.599102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.747 [2024-12-10 00:14:41.599110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.747 [2024-12-10 00:14:41.599116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.747 [2024-12-10 00:14:41.599121] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.747 [2024-12-10 00:14:41.610814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.747 [2024-12-10 00:14:41.611183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-12-10 00:14:41.611199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.747 [2024-12-10 00:14:41.611206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.747 [2024-12-10 00:14:41.611379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.747 [2024-12-10 00:14:41.611553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.747 [2024-12-10 00:14:41.611561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.747 [2024-12-10 00:14:41.611567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.747 [2024-12-10 00:14:41.611573] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.747 [2024-12-10 00:14:41.623624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.747 [2024-12-10 00:14:41.624031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-12-10 00:14:41.624047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.747 [2024-12-10 00:14:41.624053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.747 [2024-12-10 00:14:41.624221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.747 [2024-12-10 00:14:41.624389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.747 [2024-12-10 00:14:41.624397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.747 [2024-12-10 00:14:41.624403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.747 [2024-12-10 00:14:41.624409] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.747 [2024-12-10 00:14:41.636512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.747 [2024-12-10 00:14:41.636950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-12-10 00:14:41.636967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.747 [2024-12-10 00:14:41.636973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.747 [2024-12-10 00:14:41.637137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.747 [2024-12-10 00:14:41.637309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.747 [2024-12-10 00:14:41.637317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.747 [2024-12-10 00:14:41.637323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.747 [2024-12-10 00:14:41.637329] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.747 [2024-12-10 00:14:41.649452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.747 [2024-12-10 00:14:41.649889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-12-10 00:14:41.649933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.747 [2024-12-10 00:14:41.649956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.747 [2024-12-10 00:14:41.650554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.747 [2024-12-10 00:14:41.651131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.747 [2024-12-10 00:14:41.651139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.747 [2024-12-10 00:14:41.651145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.747 [2024-12-10 00:14:41.651151] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:06.747 [2024-12-10 00:14:41.662410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.747 [2024-12-10 00:14:41.662750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-12-10 00:14:41.662765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.747 [2024-12-10 00:14:41.662772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.747 [2024-12-10 00:14:41.662936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.747 [2024-12-10 00:14:41.663101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.747 [2024-12-10 00:14:41.663109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.747 [2024-12-10 00:14:41.663115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.747 [2024-12-10 00:14:41.663121] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:06.747 [2024-12-10 00:14:41.675524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:06.747 [2024-12-10 00:14:41.675947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:06.747 [2024-12-10 00:14:41.675963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:06.747 [2024-12-10 00:14:41.675971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:06.747 [2024-12-10 00:14:41.676149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:06.747 [2024-12-10 00:14:41.676332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:06.747 [2024-12-10 00:14:41.676341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:06.747 [2024-12-10 00:14:41.676351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:06.747 [2024-12-10 00:14:41.676357] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.008 [2024-12-10 00:14:41.688542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.008 [2024-12-10 00:14:41.688926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.008 [2024-12-10 00:14:41.688970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.008 [2024-12-10 00:14:41.688992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.008 [2024-12-10 00:14:41.689603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.008 [2024-12-10 00:14:41.690203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.008 [2024-12-10 00:14:41.690210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.008 [2024-12-10 00:14:41.690216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.008 [2024-12-10 00:14:41.690222] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.008 [2024-12-10 00:14:41.701375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.008 [2024-12-10 00:14:41.701784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.008 [2024-12-10 00:14:41.701800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.008 [2024-12-10 00:14:41.701806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.008 [2024-12-10 00:14:41.701970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.008 [2024-12-10 00:14:41.702132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.008 [2024-12-10 00:14:41.702140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.008 [2024-12-10 00:14:41.702146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.008 [2024-12-10 00:14:41.702152] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.008 [2024-12-10 00:14:41.714298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.008 [2024-12-10 00:14:41.714630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.008 [2024-12-10 00:14:41.714646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.008 [2024-12-10 00:14:41.714654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.008 [2024-12-10 00:14:41.714827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.008 [2024-12-10 00:14:41.715001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.008 [2024-12-10 00:14:41.715009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.008 [2024-12-10 00:14:41.715015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.008 [2024-12-10 00:14:41.715021] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.008 [2024-12-10 00:14:41.727135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.008 [2024-12-10 00:14:41.727454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.008 [2024-12-10 00:14:41.727470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.008 [2024-12-10 00:14:41.727477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.008 [2024-12-10 00:14:41.727641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.008 [2024-12-10 00:14:41.727804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.008 [2024-12-10 00:14:41.727812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.008 [2024-12-10 00:14:41.727818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.008 [2024-12-10 00:14:41.727824] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.008 [2024-12-10 00:14:41.739994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.008 [2024-12-10 00:14:41.740339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.008 [2024-12-10 00:14:41.740356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.008 [2024-12-10 00:14:41.740363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.008 [2024-12-10 00:14:41.740526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.008 [2024-12-10 00:14:41.740688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.008 [2024-12-10 00:14:41.740696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.008 [2024-12-10 00:14:41.740702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.008 [2024-12-10 00:14:41.740708] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.008 [2024-12-10 00:14:41.752872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.008 [2024-12-10 00:14:41.753309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.008 [2024-12-10 00:14:41.753326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.008 [2024-12-10 00:14:41.753333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.008 [2024-12-10 00:14:41.753512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.008 [2024-12-10 00:14:41.753675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.008 [2024-12-10 00:14:41.753683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.008 [2024-12-10 00:14:41.753689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.008 [2024-12-10 00:14:41.753694] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.008 [2024-12-10 00:14:41.765801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.008 [2024-12-10 00:14:41.766205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.008 [2024-12-10 00:14:41.766249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.008 [2024-12-10 00:14:41.766279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.008 [2024-12-10 00:14:41.766862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.008 [2024-12-10 00:14:41.767464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.008 [2024-12-10 00:14:41.767491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.008 [2024-12-10 00:14:41.767512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.008 [2024-12-10 00:14:41.767519] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.008 [2024-12-10 00:14:41.781105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.008 [2024-12-10 00:14:41.781518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.008 [2024-12-10 00:14:41.781538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.008 [2024-12-10 00:14:41.781548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.008 [2024-12-10 00:14:41.781802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.008 [2024-12-10 00:14:41.782057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.008 [2024-12-10 00:14:41.782068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.008 [2024-12-10 00:14:41.782078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.008 [2024-12-10 00:14:41.782086] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.008 [2024-12-10 00:14:41.794162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.008 [2024-12-10 00:14:41.794580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.008 [2024-12-10 00:14:41.794624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.008 [2024-12-10 00:14:41.794647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.008 [2024-12-10 00:14:41.795245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.008 [2024-12-10 00:14:41.795661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.008 [2024-12-10 00:14:41.795668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.008 [2024-12-10 00:14:41.795674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.008 [2024-12-10 00:14:41.795680] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.008 [2024-12-10 00:14:41.807024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.008 [2024-12-10 00:14:41.807445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.008 [2024-12-10 00:14:41.807462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.008 [2024-12-10 00:14:41.807469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.008 [2024-12-10 00:14:41.807631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.008 [2024-12-10 00:14:41.807798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.008 [2024-12-10 00:14:41.807806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.008 [2024-12-10 00:14:41.807812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.008 [2024-12-10 00:14:41.807817] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.008 [2024-12-10 00:14:41.820093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.008 [2024-12-10 00:14:41.820497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.008 [2024-12-10 00:14:41.820515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.008 [2024-12-10 00:14:41.820522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.008 [2024-12-10 00:14:41.820702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.008 [2024-12-10 00:14:41.820881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.008 [2024-12-10 00:14:41.820890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.008 [2024-12-10 00:14:41.820896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.009 [2024-12-10 00:14:41.820902] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.009 [2024-12-10 00:14:41.833120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.009 [2024-12-10 00:14:41.833496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.009 [2024-12-10 00:14:41.833541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.009 [2024-12-10 00:14:41.833564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.009 [2024-12-10 00:14:41.834148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.009 [2024-12-10 00:14:41.834698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.009 [2024-12-10 00:14:41.834706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.009 [2024-12-10 00:14:41.834713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.009 [2024-12-10 00:14:41.834719] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.009 [2024-12-10 00:14:41.845958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.009 [2024-12-10 00:14:41.846376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.009 [2024-12-10 00:14:41.846392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.009 [2024-12-10 00:14:41.846399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.009 [2024-12-10 00:14:41.846563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.009 [2024-12-10 00:14:41.846727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.009 [2024-12-10 00:14:41.846735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.009 [2024-12-10 00:14:41.846746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.009 [2024-12-10 00:14:41.846752] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.009 [2024-12-10 00:14:41.858912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.009 [2024-12-10 00:14:41.859248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.009 [2024-12-10 00:14:41.859264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.009 [2024-12-10 00:14:41.859271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.009 [2024-12-10 00:14:41.859435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.009 [2024-12-10 00:14:41.859599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.009 [2024-12-10 00:14:41.859607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.009 [2024-12-10 00:14:41.859613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.009 [2024-12-10 00:14:41.859619] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.009 [2024-12-10 00:14:41.871941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.009 [2024-12-10 00:14:41.872271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.009 [2024-12-10 00:14:41.872290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.009 [2024-12-10 00:14:41.872314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.009 [2024-12-10 00:14:41.872494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.009 [2024-12-10 00:14:41.872673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.009 [2024-12-10 00:14:41.872681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.009 [2024-12-10 00:14:41.872689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.009 [2024-12-10 00:14:41.872695] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.009 [2024-12-10 00:14:41.885137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.009 [2024-12-10 00:14:41.885507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.009 [2024-12-10 00:14:41.885553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.009 [2024-12-10 00:14:41.885576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.009 [2024-12-10 00:14:41.886174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.009 [2024-12-10 00:14:41.886678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.009 [2024-12-10 00:14:41.886686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.009 [2024-12-10 00:14:41.886692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.009 [2024-12-10 00:14:41.886698] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.009 [2024-12-10 00:14:41.898333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.009 [2024-12-10 00:14:41.898761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.009 [2024-12-10 00:14:41.898778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.009 [2024-12-10 00:14:41.898785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.009 [2024-12-10 00:14:41.898958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.009 [2024-12-10 00:14:41.899132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.009 [2024-12-10 00:14:41.899140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.009 [2024-12-10 00:14:41.899146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.009 [2024-12-10 00:14:41.899152] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.009 [2024-12-10 00:14:41.911277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.009 [2024-12-10 00:14:41.911699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.009 [2024-12-10 00:14:41.911716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.009 [2024-12-10 00:14:41.911723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.009 [2024-12-10 00:14:41.911895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.009 [2024-12-10 00:14:41.912068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.009 [2024-12-10 00:14:41.912076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.009 [2024-12-10 00:14:41.912083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.009 [2024-12-10 00:14:41.912089] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.009 [2024-12-10 00:14:41.924171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.009 [2024-12-10 00:14:41.924572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.009 [2024-12-10 00:14:41.924588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.009 [2024-12-10 00:14:41.924596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.009 [2024-12-10 00:14:41.924760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.009 [2024-12-10 00:14:41.924924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.009 [2024-12-10 00:14:41.924932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.009 [2024-12-10 00:14:41.924938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.009 [2024-12-10 00:14:41.924945] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.009 [2024-12-10 00:14:41.937262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.009 [2024-12-10 00:14:41.937617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.009 [2024-12-10 00:14:41.937635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.009 [2024-12-10 00:14:41.937646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.009 [2024-12-10 00:14:41.937825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.009 [2024-12-10 00:14:41.938004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.009 [2024-12-10 00:14:41.938012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.009 [2024-12-10 00:14:41.938018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.009 [2024-12-10 00:14:41.938025] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.270 [2024-12-10 00:14:41.950228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.270 [2024-12-10 00:14:41.950649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.270 [2024-12-10 00:14:41.950666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.270 [2024-12-10 00:14:41.950673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.270 [2024-12-10 00:14:41.950846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.270 [2024-12-10 00:14:41.951019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.270 [2024-12-10 00:14:41.951027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.270 [2024-12-10 00:14:41.951033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.270 [2024-12-10 00:14:41.951039] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.270 [2024-12-10 00:14:41.963070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.270 [2024-12-10 00:14:41.963488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.270 [2024-12-10 00:14:41.963505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.270 [2024-12-10 00:14:41.963512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.270 [2024-12-10 00:14:41.963676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.270 [2024-12-10 00:14:41.963839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.270 [2024-12-10 00:14:41.963847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.270 [2024-12-10 00:14:41.963853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.270 [2024-12-10 00:14:41.963858] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.270 [2024-12-10 00:14:41.975960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.270 [2024-12-10 00:14:41.976290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.270 [2024-12-10 00:14:41.976306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.270 [2024-12-10 00:14:41.976313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.270 [2024-12-10 00:14:41.976476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.270 [2024-12-10 00:14:41.976642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.270 [2024-12-10 00:14:41.976650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.270 [2024-12-10 00:14:41.976656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.270 [2024-12-10 00:14:41.976662] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.270 [2024-12-10 00:14:41.988830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.270 [2024-12-10 00:14:41.989227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.270 [2024-12-10 00:14:41.989244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.270 [2024-12-10 00:14:41.989251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.270 [2024-12-10 00:14:41.989414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.270 [2024-12-10 00:14:41.989577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.270 [2024-12-10 00:14:41.989585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.270 [2024-12-10 00:14:41.989591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.270 [2024-12-10 00:14:41.989596] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.270 [2024-12-10 00:14:42.001705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.270 [2024-12-10 00:14:42.002138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.270 [2024-12-10 00:14:42.002154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.270 [2024-12-10 00:14:42.002166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.270 [2024-12-10 00:14:42.002330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.270 [2024-12-10 00:14:42.002493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.270 [2024-12-10 00:14:42.002501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.270 [2024-12-10 00:14:42.002507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.270 [2024-12-10 00:14:42.002512] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.270 [2024-12-10 00:14:42.014520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.270 [2024-12-10 00:14:42.014932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.270 [2024-12-10 00:14:42.014949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.270 [2024-12-10 00:14:42.014955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.270 [2024-12-10 00:14:42.015118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.270 [2024-12-10 00:14:42.015288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.271 [2024-12-10 00:14:42.015296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.271 [2024-12-10 00:14:42.015306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.271 [2024-12-10 00:14:42.015311] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.271 [2024-12-10 00:14:42.027466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.271 [2024-12-10 00:14:42.027886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.271 [2024-12-10 00:14:42.027902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.271 [2024-12-10 00:14:42.027909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.271 [2024-12-10 00:14:42.028072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.271 [2024-12-10 00:14:42.028239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.271 [2024-12-10 00:14:42.028248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.271 [2024-12-10 00:14:42.028254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.271 [2024-12-10 00:14:42.028260] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.271 [2024-12-10 00:14:42.040416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.271 [2024-12-10 00:14:42.040829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.271 [2024-12-10 00:14:42.040845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.271 [2024-12-10 00:14:42.040852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.271 [2024-12-10 00:14:42.041015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.271 [2024-12-10 00:14:42.041183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.271 [2024-12-10 00:14:42.041191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.271 [2024-12-10 00:14:42.041197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.271 [2024-12-10 00:14:42.041203] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.271 [2024-12-10 00:14:42.053340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.271 [2024-12-10 00:14:42.053683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.271 [2024-12-10 00:14:42.053698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.271 [2024-12-10 00:14:42.053705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.271 [2024-12-10 00:14:42.053868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.271 [2024-12-10 00:14:42.054032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.271 [2024-12-10 00:14:42.054040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.271 [2024-12-10 00:14:42.054046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.271 [2024-12-10 00:14:42.054051] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.271 [2024-12-10 00:14:42.066210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.271 [2024-12-10 00:14:42.066617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.271 [2024-12-10 00:14:42.066660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.271 [2024-12-10 00:14:42.066682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.271 [2024-12-10 00:14:42.067169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.271 [2024-12-10 00:14:42.067334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.271 [2024-12-10 00:14:42.067342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.271 [2024-12-10 00:14:42.067347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.271 [2024-12-10 00:14:42.067353] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.271 [2024-12-10 00:14:42.079040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.271 [2024-12-10 00:14:42.079408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.271 [2024-12-10 00:14:42.079425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.271 [2024-12-10 00:14:42.079432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.271 [2024-12-10 00:14:42.079605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.271 [2024-12-10 00:14:42.079778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.271 [2024-12-10 00:14:42.079786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.271 [2024-12-10 00:14:42.079792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.271 [2024-12-10 00:14:42.079798] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.271 [2024-12-10 00:14:42.092166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.271 [2024-12-10 00:14:42.092550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.271 [2024-12-10 00:14:42.092567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.271 [2024-12-10 00:14:42.092574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.271 [2024-12-10 00:14:42.092747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.271 [2024-12-10 00:14:42.092921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.271 [2024-12-10 00:14:42.092929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.271 [2024-12-10 00:14:42.092935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.271 [2024-12-10 00:14:42.092942] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.271 [2024-12-10 00:14:42.105097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.271 [2024-12-10 00:14:42.105533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.271 [2024-12-10 00:14:42.105550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.271 [2024-12-10 00:14:42.105563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.271 [2024-12-10 00:14:42.105736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.271 [2024-12-10 00:14:42.105909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.271 [2024-12-10 00:14:42.105917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.271 [2024-12-10 00:14:42.105924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.271 [2024-12-10 00:14:42.105930] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.271 [2024-12-10 00:14:42.118018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.271 [2024-12-10 00:14:42.118426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.271 [2024-12-10 00:14:42.118470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.271 [2024-12-10 00:14:42.118493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.271 [2024-12-10 00:14:42.118970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.271 [2024-12-10 00:14:42.119135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.271 [2024-12-10 00:14:42.119143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.271 [2024-12-10 00:14:42.119149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.271 [2024-12-10 00:14:42.119154] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.271 [2024-12-10 00:14:42.130846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.271 [2024-12-10 00:14:42.131282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.271 [2024-12-10 00:14:42.131326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.271 [2024-12-10 00:14:42.131349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.271 [2024-12-10 00:14:42.131931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.271 [2024-12-10 00:14:42.132477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.271 [2024-12-10 00:14:42.132485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.271 [2024-12-10 00:14:42.132490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.271 [2024-12-10 00:14:42.132496] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.271 [2024-12-10 00:14:42.143718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.271 [2024-12-10 00:14:42.144057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.271 [2024-12-10 00:14:42.144073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.271 [2024-12-10 00:14:42.144079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.271 [2024-12-10 00:14:42.144249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.271 [2024-12-10 00:14:42.144416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.272 [2024-12-10 00:14:42.144424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.272 [2024-12-10 00:14:42.144430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.272 [2024-12-10 00:14:42.144436] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.272 [2024-12-10 00:14:42.156527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.272 [2024-12-10 00:14:42.156972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.272 [2024-12-10 00:14:42.156988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.272 [2024-12-10 00:14:42.156995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.272 [2024-12-10 00:14:42.157173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.272 [2024-12-10 00:14:42.157352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.272 [2024-12-10 00:14:42.157360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.272 [2024-12-10 00:14:42.157366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.272 [2024-12-10 00:14:42.157371] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.272 [2024-12-10 00:14:42.169471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.272 [2024-12-10 00:14:42.169927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.272 [2024-12-10 00:14:42.169943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.272 [2024-12-10 00:14:42.169950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.272 [2024-12-10 00:14:42.170112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.272 [2024-12-10 00:14:42.170280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.272 [2024-12-10 00:14:42.170289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.272 [2024-12-10 00:14:42.170295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.272 [2024-12-10 00:14:42.170300] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.272 [2024-12-10 00:14:42.182396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.272 [2024-12-10 00:14:42.182814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.272 [2024-12-10 00:14:42.182860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.272 [2024-12-10 00:14:42.182883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.272 [2024-12-10 00:14:42.183456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.272 [2024-12-10 00:14:42.183621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.272 [2024-12-10 00:14:42.183629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.272 [2024-12-10 00:14:42.183635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.272 [2024-12-10 00:14:42.183644] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.272 5683.60 IOPS, 22.20 MiB/s [2024-12-09T23:14:42.208Z] [2024-12-10 00:14:42.195306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.272 [2024-12-10 00:14:42.195605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.272 [2024-12-10 00:14:42.195621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.272 [2024-12-10 00:14:42.195628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.272 [2024-12-10 00:14:42.195792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.272 [2024-12-10 00:14:42.195955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.272 [2024-12-10 00:14:42.195963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.272 [2024-12-10 00:14:42.195969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.272 [2024-12-10 00:14:42.195975] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.533 [2024-12-10 00:14:42.208287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.533 [2024-12-10 00:14:42.208633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.533 [2024-12-10 00:14:42.208649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.533 [2024-12-10 00:14:42.208656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.533 [2024-12-10 00:14:42.208834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.533 [2024-12-10 00:14:42.209013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.533 [2024-12-10 00:14:42.209021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.533 [2024-12-10 00:14:42.209028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.533 [2024-12-10 00:14:42.209034] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.533 [2024-12-10 00:14:42.221193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.533 [2024-12-10 00:14:42.221620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.533 [2024-12-10 00:14:42.221663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.533 [2024-12-10 00:14:42.221685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.533 [2024-12-10 00:14:42.222260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.533 [2024-12-10 00:14:42.222424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.533 [2024-12-10 00:14:42.222432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.533 [2024-12-10 00:14:42.222438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.533 [2024-12-10 00:14:42.222444] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.533 [2024-12-10 00:14:42.236060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.533 [2024-12-10 00:14:42.236506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.533 [2024-12-10 00:14:42.236528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.533 [2024-12-10 00:14:42.236538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.533 [2024-12-10 00:14:42.236793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.533 [2024-12-10 00:14:42.237048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.533 [2024-12-10 00:14:42.237059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.533 [2024-12-10 00:14:42.237068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.533 [2024-12-10 00:14:42.237076] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.533 [2024-12-10 00:14:42.249133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.533 [2024-12-10 00:14:42.249577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.533 [2024-12-10 00:14:42.249620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.533 [2024-12-10 00:14:42.249643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.533 [2024-12-10 00:14:42.250239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.533 [2024-12-10 00:14:42.250712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.533 [2024-12-10 00:14:42.250720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.533 [2024-12-10 00:14:42.250726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.533 [2024-12-10 00:14:42.250732] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.533 [2024-12-10 00:14:42.262017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.533 [2024-12-10 00:14:42.262443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.533 [2024-12-10 00:14:42.262459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.533 [2024-12-10 00:14:42.262466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.533 [2024-12-10 00:14:42.262629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.533 [2024-12-10 00:14:42.262793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.533 [2024-12-10 00:14:42.262801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.533 [2024-12-10 00:14:42.262807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.533 [2024-12-10 00:14:42.262812] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.533 [2024-12-10 00:14:42.274947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.533 [2024-12-10 00:14:42.275359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.533 [2024-12-10 00:14:42.275375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.533 [2024-12-10 00:14:42.275385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.533 [2024-12-10 00:14:42.275548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.533 [2024-12-10 00:14:42.275712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.533 [2024-12-10 00:14:42.275719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.533 [2024-12-10 00:14:42.275725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.533 [2024-12-10 00:14:42.275731] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.533 [2024-12-10 00:14:42.287900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.533 [2024-12-10 00:14:42.288237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.533 [2024-12-10 00:14:42.288255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.533 [2024-12-10 00:14:42.288262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.533 [2024-12-10 00:14:42.288427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.533 [2024-12-10 00:14:42.288590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.533 [2024-12-10 00:14:42.288598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.533 [2024-12-10 00:14:42.288604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.533 [2024-12-10 00:14:42.288610] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.533 [2024-12-10 00:14:42.300780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.533 [2024-12-10 00:14:42.301198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.533 [2024-12-10 00:14:42.301214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.534 [2024-12-10 00:14:42.301221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.534 [2024-12-10 00:14:42.301384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.534 [2024-12-10 00:14:42.301548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.534 [2024-12-10 00:14:42.301556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.534 [2024-12-10 00:14:42.301562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.534 [2024-12-10 00:14:42.301567] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.534 [2024-12-10 00:14:42.313692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.534 [2024-12-10 00:14:42.314098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.534 [2024-12-10 00:14:42.314114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.534 [2024-12-10 00:14:42.314120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.534 [2024-12-10 00:14:42.314290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.534 [2024-12-10 00:14:42.314457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.534 [2024-12-10 00:14:42.314465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.534 [2024-12-10 00:14:42.314471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.534 [2024-12-10 00:14:42.314476] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.534 [2024-12-10 00:14:42.326661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.534 [2024-12-10 00:14:42.327047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.534 [2024-12-10 00:14:42.327064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.534 [2024-12-10 00:14:42.327071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.534 [2024-12-10 00:14:42.327249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.534 [2024-12-10 00:14:42.327424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.534 [2024-12-10 00:14:42.327432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.534 [2024-12-10 00:14:42.327438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.534 [2024-12-10 00:14:42.327444] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.534 [2024-12-10 00:14:42.339475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.534 [2024-12-10 00:14:42.339908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.534 [2024-12-10 00:14:42.339925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.534 [2024-12-10 00:14:42.339932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.534 [2024-12-10 00:14:42.340105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.534 [2024-12-10 00:14:42.340283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.534 [2024-12-10 00:14:42.340292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.534 [2024-12-10 00:14:42.340298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.534 [2024-12-10 00:14:42.340304] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.534 [2024-12-10 00:14:42.352518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.534 [2024-12-10 00:14:42.352922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.534 [2024-12-10 00:14:42.352939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.534 [2024-12-10 00:14:42.352946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.534 [2024-12-10 00:14:42.353124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.534 [2024-12-10 00:14:42.353308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.534 [2024-12-10 00:14:42.353318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.534 [2024-12-10 00:14:42.353324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.534 [2024-12-10 00:14:42.353334] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.534 [2024-12-10 00:14:42.365334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.534 [2024-12-10 00:14:42.365773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.534 [2024-12-10 00:14:42.365813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.534 [2024-12-10 00:14:42.365837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.534 [2024-12-10 00:14:42.366435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.534 [2024-12-10 00:14:42.366987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.534 [2024-12-10 00:14:42.366994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.534 [2024-12-10 00:14:42.367000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.534 [2024-12-10 00:14:42.367006] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.534 [2024-12-10 00:14:42.378236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.534 [2024-12-10 00:14:42.378633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.534 [2024-12-10 00:14:42.378676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.534 [2024-12-10 00:14:42.378698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.534 [2024-12-10 00:14:42.379113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.534 [2024-12-10 00:14:42.379283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.534 [2024-12-10 00:14:42.379292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.534 [2024-12-10 00:14:42.379298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.534 [2024-12-10 00:14:42.379303] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.534 [2024-12-10 00:14:42.391156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.534 [2024-12-10 00:14:42.391573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.534 [2024-12-10 00:14:42.391588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.534 [2024-12-10 00:14:42.391595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.534 [2024-12-10 00:14:42.391759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.534 [2024-12-10 00:14:42.391921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.534 [2024-12-10 00:14:42.391929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.534 [2024-12-10 00:14:42.391935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.534 [2024-12-10 00:14:42.391941] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.534 [2024-12-10 00:14:42.404102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.534 [2024-12-10 00:14:42.404526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.534 [2024-12-10 00:14:42.404542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.534 [2024-12-10 00:14:42.404549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.534 [2024-12-10 00:14:42.404712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.534 [2024-12-10 00:14:42.404875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.534 [2024-12-10 00:14:42.404882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.534 [2024-12-10 00:14:42.404888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.534 [2024-12-10 00:14:42.404894] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.534 [2024-12-10 00:14:42.417045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.534 [2024-12-10 00:14:42.417480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.534 [2024-12-10 00:14:42.417524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.534 [2024-12-10 00:14:42.417546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.534 [2024-12-10 00:14:42.418033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.534 [2024-12-10 00:14:42.418203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.534 [2024-12-10 00:14:42.418211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.534 [2024-12-10 00:14:42.418217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.534 [2024-12-10 00:14:42.418223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.534 [2024-12-10 00:14:42.429910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.535 [2024-12-10 00:14:42.430327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.535 [2024-12-10 00:14:42.430344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.535 [2024-12-10 00:14:42.430351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.535 [2024-12-10 00:14:42.430513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.535 [2024-12-10 00:14:42.430676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.535 [2024-12-10 00:14:42.430684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.535 [2024-12-10 00:14:42.430689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.535 [2024-12-10 00:14:42.430695] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.535 [2024-12-10 00:14:42.442846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.535 [2024-12-10 00:14:42.443264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.535 [2024-12-10 00:14:42.443280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.535 [2024-12-10 00:14:42.443287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.535 [2024-12-10 00:14:42.443453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.535 [2024-12-10 00:14:42.443617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.535 [2024-12-10 00:14:42.443625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.535 [2024-12-10 00:14:42.443631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.535 [2024-12-10 00:14:42.443636] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.535 [2024-12-10 00:14:42.455659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.535 [2024-12-10 00:14:42.456071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.535 [2024-12-10 00:14:42.456087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.535 [2024-12-10 00:14:42.456094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.535 [2024-12-10 00:14:42.456262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.535 [2024-12-10 00:14:42.456425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.535 [2024-12-10 00:14:42.456432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.535 [2024-12-10 00:14:42.456438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.535 [2024-12-10 00:14:42.456444] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.795 [2024-12-10 00:14:42.468696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.795 [2024-12-10 00:14:42.469087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.795 [2024-12-10 00:14:42.469103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.795 [2024-12-10 00:14:42.469110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.795 [2024-12-10 00:14:42.469280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.795 [2024-12-10 00:14:42.469443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.795 [2024-12-10 00:14:42.469451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.795 [2024-12-10 00:14:42.469457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.795 [2024-12-10 00:14:42.469462] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.795 [2024-12-10 00:14:42.481554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.795 [2024-12-10 00:14:42.481978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.795 [2024-12-10 00:14:42.482021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.795 [2024-12-10 00:14:42.482044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.795 [2024-12-10 00:14:42.482641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.795 [2024-12-10 00:14:42.483173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.795 [2024-12-10 00:14:42.483184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.795 [2024-12-10 00:14:42.483190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.796 [2024-12-10 00:14:42.483196] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.796 [2024-12-10 00:14:42.494430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.796 [2024-12-10 00:14:42.494850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.796 [2024-12-10 00:14:42.494866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.796 [2024-12-10 00:14:42.494873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.796 [2024-12-10 00:14:42.495035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.796 [2024-12-10 00:14:42.495205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.796 [2024-12-10 00:14:42.495213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.796 [2024-12-10 00:14:42.495219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.796 [2024-12-10 00:14:42.495225] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.796 [2024-12-10 00:14:42.507323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.796 [2024-12-10 00:14:42.507718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.796 [2024-12-10 00:14:42.507734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.796 [2024-12-10 00:14:42.507741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.796 [2024-12-10 00:14:42.507904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.796 [2024-12-10 00:14:42.508067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.796 [2024-12-10 00:14:42.508074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.796 [2024-12-10 00:14:42.508080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.796 [2024-12-10 00:14:42.508086] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.796 [2024-12-10 00:14:42.520194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.796 [2024-12-10 00:14:42.520616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.796 [2024-12-10 00:14:42.520632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.796 [2024-12-10 00:14:42.520639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.796 [2024-12-10 00:14:42.520802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.796 [2024-12-10 00:14:42.520965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.796 [2024-12-10 00:14:42.520972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.796 [2024-12-10 00:14:42.520978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.796 [2024-12-10 00:14:42.520987] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.796 [2024-12-10 00:14:42.533141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.796 [2024-12-10 00:14:42.533472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.796 [2024-12-10 00:14:42.533489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.796 [2024-12-10 00:14:42.533496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.796 [2024-12-10 00:14:42.533658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.796 [2024-12-10 00:14:42.533821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.796 [2024-12-10 00:14:42.533829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.796 [2024-12-10 00:14:42.533835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.796 [2024-12-10 00:14:42.533841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.796 [2024-12-10 00:14:42.545995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.796 [2024-12-10 00:14:42.546411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.796 [2024-12-10 00:14:42.546427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.796 [2024-12-10 00:14:42.546434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.796 [2024-12-10 00:14:42.546598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.796 [2024-12-10 00:14:42.546761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.796 [2024-12-10 00:14:42.546768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.796 [2024-12-10 00:14:42.546774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.796 [2024-12-10 00:14:42.546780] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.796 [2024-12-10 00:14:42.558936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.796 [2024-12-10 00:14:42.559351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.796 [2024-12-10 00:14:42.559367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.796 [2024-12-10 00:14:42.559374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.796 [2024-12-10 00:14:42.559537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.796 [2024-12-10 00:14:42.559699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.796 [2024-12-10 00:14:42.559707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.796 [2024-12-10 00:14:42.559713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.796 [2024-12-10 00:14:42.559719] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.796 [2024-12-10 00:14:42.571878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.796 [2024-12-10 00:14:42.572295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.796 [2024-12-10 00:14:42.572328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.796 [2024-12-10 00:14:42.572352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.796 [2024-12-10 00:14:42.572934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.796 [2024-12-10 00:14:42.573509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.796 [2024-12-10 00:14:42.573518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.796 [2024-12-10 00:14:42.573523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.796 [2024-12-10 00:14:42.573529] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.796 [2024-12-10 00:14:42.584699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.796 [2024-12-10 00:14:42.585126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.796 [2024-12-10 00:14:42.585180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.796 [2024-12-10 00:14:42.585204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.796 [2024-12-10 00:14:42.585786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.796 [2024-12-10 00:14:42.586325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.796 [2024-12-10 00:14:42.586333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.796 [2024-12-10 00:14:42.586339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.796 [2024-12-10 00:14:42.586345] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.796 [2024-12-10 00:14:42.599684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.796 [2024-12-10 00:14:42.600064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.796 [2024-12-10 00:14:42.600085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.796 [2024-12-10 00:14:42.600095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.796 [2024-12-10 00:14:42.600356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.796 [2024-12-10 00:14:42.600611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.796 [2024-12-10 00:14:42.600623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.796 [2024-12-10 00:14:42.600632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.796 [2024-12-10 00:14:42.600640] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.796 [2024-12-10 00:14:42.612790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.796 [2024-12-10 00:14:42.613212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.796 [2024-12-10 00:14:42.613257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.796 [2024-12-10 00:14:42.613280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.796 [2024-12-10 00:14:42.613871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.796 [2024-12-10 00:14:42.614477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.796 [2024-12-10 00:14:42.614506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.796 [2024-12-10 00:14:42.614512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.796 [2024-12-10 00:14:42.614518] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.797 [2024-12-10 00:14:42.625671] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.797 [2024-12-10 00:14:42.626080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.797 [2024-12-10 00:14:42.626097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.797 [2024-12-10 00:14:42.626104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.797 [2024-12-10 00:14:42.626282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.797 [2024-12-10 00:14:42.626456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.797 [2024-12-10 00:14:42.626464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.797 [2024-12-10 00:14:42.626471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.797 [2024-12-10 00:14:42.626477] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.797 [2024-12-10 00:14:42.638573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.797 [2024-12-10 00:14:42.638895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.797 [2024-12-10 00:14:42.638911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.797 [2024-12-10 00:14:42.638918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.797 [2024-12-10 00:14:42.639081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.797 [2024-12-10 00:14:42.639249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.797 [2024-12-10 00:14:42.639259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.797 [2024-12-10 00:14:42.639266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.797 [2024-12-10 00:14:42.639272] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.797 [2024-12-10 00:14:42.651577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.797 [2024-12-10 00:14:42.651845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.797 [2024-12-10 00:14:42.651860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.797 [2024-12-10 00:14:42.651866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.797 [2024-12-10 00:14:42.652030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.797 [2024-12-10 00:14:42.652198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.797 [2024-12-10 00:14:42.652211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.797 [2024-12-10 00:14:42.652217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.797 [2024-12-10 00:14:42.652223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.797 [2024-12-10 00:14:42.664788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.797 [2024-12-10 00:14:42.665126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.797 [2024-12-10 00:14:42.665142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.797 [2024-12-10 00:14:42.665150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.797 [2024-12-10 00:14:42.665333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.797 [2024-12-10 00:14:42.665511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.797 [2024-12-10 00:14:42.665520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.797 [2024-12-10 00:14:42.665527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.797 [2024-12-10 00:14:42.665533] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.797 [2024-12-10 00:14:42.677645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.797 [2024-12-10 00:14:42.678037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.797 [2024-12-10 00:14:42.678053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.797 [2024-12-10 00:14:42.678059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.797 [2024-12-10 00:14:42.678228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.797 [2024-12-10 00:14:42.678391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.797 [2024-12-10 00:14:42.678399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.797 [2024-12-10 00:14:42.678405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.797 [2024-12-10 00:14:42.678411] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.797 [2024-12-10 00:14:42.690497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.797 [2024-12-10 00:14:42.690823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.797 [2024-12-10 00:14:42.690839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.797 [2024-12-10 00:14:42.690846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.797 [2024-12-10 00:14:42.691009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.797 [2024-12-10 00:14:42.691178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.797 [2024-12-10 00:14:42.691187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.797 [2024-12-10 00:14:42.691193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.797 [2024-12-10 00:14:42.691202] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:07.797 [2024-12-10 00:14:42.703342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.797 [2024-12-10 00:14:42.703666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.797 [2024-12-10 00:14:42.703682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.797 [2024-12-10 00:14:42.703689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.797 [2024-12-10 00:14:42.703852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.797 [2024-12-10 00:14:42.704016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.797 [2024-12-10 00:14:42.704024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.797 [2024-12-10 00:14:42.704029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.797 [2024-12-10 00:14:42.704035] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:07.797 [2024-12-10 00:14:42.716426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:07.797 [2024-12-10 00:14:42.716750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:07.797 [2024-12-10 00:14:42.716766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:07.797 [2024-12-10 00:14:42.716773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:07.797 [2024-12-10 00:14:42.716937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:07.797 [2024-12-10 00:14:42.717101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:07.797 [2024-12-10 00:14:42.717109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:07.797 [2024-12-10 00:14:42.717115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:07.797 [2024-12-10 00:14:42.717121] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.058 [2024-12-10 00:14:42.729564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.058 [2024-12-10 00:14:42.729905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.058 [2024-12-10 00:14:42.729922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.058 [2024-12-10 00:14:42.729929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.058 [2024-12-10 00:14:42.730113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.058 [2024-12-10 00:14:42.730295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.058 [2024-12-10 00:14:42.730304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.058 [2024-12-10 00:14:42.730310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.058 [2024-12-10 00:14:42.730316] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.058 [2024-12-10 00:14:42.742472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.058 [2024-12-10 00:14:42.742829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.058 [2024-12-10 00:14:42.742850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.058 [2024-12-10 00:14:42.742857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.058 [2024-12-10 00:14:42.743020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.058 [2024-12-10 00:14:42.743189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.058 [2024-12-10 00:14:42.743197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.058 [2024-12-10 00:14:42.743203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.058 [2024-12-10 00:14:42.743209] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
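The timestamps on the nvme_ctrlr_disconnect notices above show the reconnect attempts landing roughly 13 ms apart. If this console output is saved to a file, the spacing can be pulled out with a short pipeline (a sketch only: build.log is a placeholder name for the saved log, and the sed pattern is keyed to the 00:14 minute shown in this stretch):

  # Print the gap, in milliseconds, between successive 'resetting controller' notices
  grep 'nvme_ctrlr_disconnect' build.log \
    | sed 's/.*\[2024-12-10 00:14:\([0-9.]*\)\].*/\1/' \
    | awk 'NR > 1 { printf "%.1f ms\n", ($1 - prev) * 1000 } { prev = $1 }'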
00:33:08.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/bdevperf.sh: line 35: 524119 Killed "${NVMF_APP[@]}" "$@"
00:33:08.058 00:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:33:08.058 00:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:33:08.058 00:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:08.058 00:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:08.058 00:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:08.058 [2024-12-10 00:14:42.755643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:33:08.058 00:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=525368
00:33:08.058 [2024-12-10 00:14:42.756071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:08.058 [2024-12-10 00:14:42.756088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420
00:33:08.058 [2024-12-10 00:14:42.756095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set
00:33:08.058 [2024-12-10 00:14:42.756280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor
00:33:08.058 00:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 525368
00:33:08.058 [2024-12-10 00:14:42.756459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:33:08.058 [2024-12-10 00:14:42.756468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:33:08.058 [2024-12-10 00:14:42.756475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:33:08.058 [2024-12-10 00:14:42.756481] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:33:08.058 00:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:33:08.058 00:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 525368 ']'
00:33:08.058 00:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:08.058 00:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:08.058 00:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:08.058 00:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:08.058 00:14:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.058 [2024-12-10 00:14:42.768777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.058 [2024-12-10 00:14:42.769138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.058 [2024-12-10 00:14:42.769154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.058 [2024-12-10 00:14:42.769173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.058 [2024-12-10 00:14:42.769351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.058 [2024-12-10 00:14:42.769530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.058 [2024-12-10 00:14:42.769539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.058 [2024-12-10 00:14:42.769545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.058 [2024-12-10 00:14:42.769551] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.058 [2024-12-10 00:14:42.782007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.058 [2024-12-10 00:14:42.782353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.058 [2024-12-10 00:14:42.782370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.058 [2024-12-10 00:14:42.782377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.059 [2024-12-10 00:14:42.782555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.059 [2024-12-10 00:14:42.782732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.059 [2024-12-10 00:14:42.782741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.059 [2024-12-10 00:14:42.782747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.059 [2024-12-10 00:14:42.782753] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.059 [2024-12-10 00:14:42.795222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.059 [2024-12-10 00:14:42.795634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.059 [2024-12-10 00:14:42.795651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.059 [2024-12-10 00:14:42.795658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.059 [2024-12-10 00:14:42.795836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.059 [2024-12-10 00:14:42.796015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.059 [2024-12-10 00:14:42.796023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.059 [2024-12-10 00:14:42.796029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.059 [2024-12-10 00:14:42.796035] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.059 [2024-12-10 00:14:42.803415] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:33:08.059 [2024-12-10 00:14:42.803454] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:08.059 [2024-12-10 00:14:42.808200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.059 [2024-12-10 00:14:42.808479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.059 [2024-12-10 00:14:42.808495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.059 [2024-12-10 00:14:42.808502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.059 [2024-12-10 00:14:42.808675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.059 [2024-12-10 00:14:42.808850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.059 [2024-12-10 00:14:42.808859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.059 [2024-12-10 00:14:42.808865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.059 [2024-12-10 00:14:42.808871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.059 [2024-12-10 00:14:42.821206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.059 [2024-12-10 00:14:42.821492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.059 [2024-12-10 00:14:42.821509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.059 [2024-12-10 00:14:42.821517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.059 [2024-12-10 00:14:42.821690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.059 [2024-12-10 00:14:42.821864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.059 [2024-12-10 00:14:42.821873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.059 [2024-12-10 00:14:42.821879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.059 [2024-12-10 00:14:42.821885] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.059 [2024-12-10 00:14:42.834311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.059 [2024-12-10 00:14:42.834646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.059 [2024-12-10 00:14:42.834663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.059 [2024-12-10 00:14:42.834670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.059 [2024-12-10 00:14:42.834849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.059 [2024-12-10 00:14:42.835027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.059 [2024-12-10 00:14:42.835034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.059 [2024-12-10 00:14:42.835041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.059 [2024-12-10 00:14:42.835047] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.059 [2024-12-10 00:14:42.847488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.059 [2024-12-10 00:14:42.847935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.059 [2024-12-10 00:14:42.847951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.059 [2024-12-10 00:14:42.847962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.059 [2024-12-10 00:14:42.848141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.059 [2024-12-10 00:14:42.848326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.059 [2024-12-10 00:14:42.848335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.059 [2024-12-10 00:14:42.848341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.059 [2024-12-10 00:14:42.848347] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.059 [2024-12-10 00:14:42.860465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.059 [2024-12-10 00:14:42.860868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.059 [2024-12-10 00:14:42.860885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.059 [2024-12-10 00:14:42.860892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.059 [2024-12-10 00:14:42.861070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.059 [2024-12-10 00:14:42.861254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.059 [2024-12-10 00:14:42.861263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.059 [2024-12-10 00:14:42.861270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.059 [2024-12-10 00:14:42.861276] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.059 [2024-12-10 00:14:42.873554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.059 [2024-12-10 00:14:42.873900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.059 [2024-12-10 00:14:42.873917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.059 [2024-12-10 00:14:42.873925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.059 [2024-12-10 00:14:42.874102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.059 [2024-12-10 00:14:42.874287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.059 [2024-12-10 00:14:42.874296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.059 [2024-12-10 00:14:42.874302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.059 [2024-12-10 00:14:42.874308] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.059 [2024-12-10 00:14:42.884870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:08.059 [2024-12-10 00:14:42.886774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.059 [2024-12-10 00:14:42.887127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.059 [2024-12-10 00:14:42.887143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.059 [2024-12-10 00:14:42.887151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.059 [2024-12-10 00:14:42.887337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.059 [2024-12-10 00:14:42.887518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.059 [2024-12-10 00:14:42.887527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.059 [2024-12-10 00:14:42.887534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.059 [2024-12-10 00:14:42.887541] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.059 [2024-12-10 00:14:42.899844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.059 [2024-12-10 00:14:42.900256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.059 [2024-12-10 00:14:42.900276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.059 [2024-12-10 00:14:42.900285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.059 [2024-12-10 00:14:42.900463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.059 [2024-12-10 00:14:42.900643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.059 [2024-12-10 00:14:42.900651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.059 [2024-12-10 00:14:42.900659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.059 [2024-12-10 00:14:42.900665] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.059 [2024-12-10 00:14:42.912955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.059 [2024-12-10 00:14:42.913253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.060 [2024-12-10 00:14:42.913270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.060 [2024-12-10 00:14:42.913278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.060 [2024-12-10 00:14:42.913457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.060 [2024-12-10 00:14:42.913636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.060 [2024-12-10 00:14:42.913645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.060 [2024-12-10 00:14:42.913652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.060 [2024-12-10 00:14:42.913658] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.060 [2024-12-10 00:14:42.926039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.060 [2024-12-10 00:14:42.926413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.060 [2024-12-10 00:14:42.926431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.060 [2024-12-10 00:14:42.926439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.060 [2024-12-10 00:14:42.926618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.060 [2024-12-10 00:14:42.926797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.060 [2024-12-10 00:14:42.926806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.060 [2024-12-10 00:14:42.926818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.060 [2024-12-10 00:14:42.926824] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.060 [2024-12-10 00:14:42.927035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:08.060 [2024-12-10 00:14:42.927059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:08.060 [2024-12-10 00:14:42.927070] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:08.060 [2024-12-10 00:14:42.927079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:08.060 [2024-12-10 00:14:42.927088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
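The app_setup_trace notices show the restarted target came up with tracepoint group mask 0xFFFF, so a trace snapshot can be taken while it runs. A sketch based only on the hints the log itself prints (it assumes spdk_trace is on PATH or run from the SPDK build tree):

  # Snapshot the nvmf tracepoints of the app started with shm id 0
  spdk_trace -s nvmf -i 0
  # Or keep the raw shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0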
00:33:08.060 [2024-12-10 00:14:42.928432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:08.060 [2024-12-10 00:14:42.928538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.060 [2024-12-10 00:14:42.928539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:08.060 [2024-12-10 00:14:42.939125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.060 [2024-12-10 00:14:42.939514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.060 [2024-12-10 00:14:42.939533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.060 [2024-12-10 00:14:42.939542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.060 [2024-12-10 00:14:42.939723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.060 [2024-12-10 00:14:42.939903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.060 [2024-12-10 00:14:42.939912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.060 [2024-12-10 00:14:42.939919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.060 [2024-12-10 00:14:42.939926] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.060 [2024-12-10 00:14:42.952218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.060 [2024-12-10 00:14:42.952626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.060 [2024-12-10 00:14:42.952646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.060 [2024-12-10 00:14:42.952655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.060 [2024-12-10 00:14:42.952835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.060 [2024-12-10 00:14:42.953015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.060 [2024-12-10 00:14:42.953024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.060 [2024-12-10 00:14:42.953031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.060 [2024-12-10 00:14:42.953038] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
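The three reactor notices match the -m 0xE mask passed to nvmf_tgt after the restart: 0xE is binary 1110, i.e. cores 1, 2 and 3, which also agrees with the earlier "Total cores available: 3" notice. A small bash sketch for decoding such a mask (an illustrative helper, not part of the test scripts):

  # List the CPU cores selected by an SPDK core mask
  mask=0xE
  for core in $(seq 0 63); do
    if (( (mask >> core) & 1 )); then
      echo "core $core"   # prints core 1, core 2, core 3 for 0xE
    fi
  done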
00:33:08.060 [2024-12-10 00:14:42.965323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.060 [2024-12-10 00:14:42.965730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.060 [2024-12-10 00:14:42.965750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.060 [2024-12-10 00:14:42.965765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.060 [2024-12-10 00:14:42.965944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.060 [2024-12-10 00:14:42.966124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.060 [2024-12-10 00:14:42.966133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.060 [2024-12-10 00:14:42.966140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.060 [2024-12-10 00:14:42.966147] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.060 [2024-12-10 00:14:42.978436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.060 [2024-12-10 00:14:42.978753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.060 [2024-12-10 00:14:42.978772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.060 [2024-12-10 00:14:42.978781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.060 [2024-12-10 00:14:42.978960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.060 [2024-12-10 00:14:42.979140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.060 [2024-12-10 00:14:42.979148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.060 [2024-12-10 00:14:42.979156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.060 [2024-12-10 00:14:42.979171] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.320 [2024-12-10 00:14:42.991631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.320 [2024-12-10 00:14:42.992077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.320 [2024-12-10 00:14:42.992097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.320 [2024-12-10 00:14:42.992105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.320 [2024-12-10 00:14:42.992290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.320 [2024-12-10 00:14:42.992471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.320 [2024-12-10 00:14:42.992479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.320 [2024-12-10 00:14:42.992486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.320 [2024-12-10 00:14:42.992493] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.320 [2024-12-10 00:14:43.004788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.320 [2024-12-10 00:14:43.005140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.320 [2024-12-10 00:14:43.005162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.320 [2024-12-10 00:14:43.005171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.320 [2024-12-10 00:14:43.005350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.320 [2024-12-10 00:14:43.005534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.320 [2024-12-10 00:14:43.005542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.320 [2024-12-10 00:14:43.005549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.320 [2024-12-10 00:14:43.005556] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.320 [2024-12-10 00:14:43.017987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.320 [2024-12-10 00:14:43.018400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.320 [2024-12-10 00:14:43.018417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.320 [2024-12-10 00:14:43.018424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.320 [2024-12-10 00:14:43.018603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.320 [2024-12-10 00:14:43.018782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.320 [2024-12-10 00:14:43.018791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.320 [2024-12-10 00:14:43.018798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.320 [2024-12-10 00:14:43.018805] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.320 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:08.320 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:33:08.320 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:08.320 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:08.320 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.320 [2024-12-10 00:14:43.031079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.320 [2024-12-10 00:14:43.031493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.320 [2024-12-10 00:14:43.031510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.320 [2024-12-10 00:14:43.031518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.320 [2024-12-10 00:14:43.031695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.320 [2024-12-10 00:14:43.031874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.320 [2024-12-10 00:14:43.031882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.320 [2024-12-10 00:14:43.031889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.320 [2024-12-10 00:14:43.031895] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.320 [2024-12-10 00:14:43.044170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.320 [2024-12-10 00:14:43.044590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.320 [2024-12-10 00:14:43.044606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.320 [2024-12-10 00:14:43.044614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.320 [2024-12-10 00:14:43.044795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.320 [2024-12-10 00:14:43.044976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.320 [2024-12-10 00:14:43.044984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.320 [2024-12-10 00:14:43.044991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.320 [2024-12-10 00:14:43.044997] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.320 [2024-12-10 00:14:43.057265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.320 [2024-12-10 00:14:43.057559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.320 [2024-12-10 00:14:43.057575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.320 [2024-12-10 00:14:43.057583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.320 [2024-12-10 00:14:43.057761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.320 [2024-12-10 00:14:43.057940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.320 [2024-12-10 00:14:43.057948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.320 [2024-12-10 00:14:43.057956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.321 [2024-12-10 00:14:43.057961] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.321 [2024-12-10 00:14:43.066582] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:08.321 [2024-12-10 00:14:43.070396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.321 [2024-12-10 00:14:43.070811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.321 [2024-12-10 00:14:43.070826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.321 [2024-12-10 00:14:43.070834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.321 [2024-12-10 00:14:43.071012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.321 [2024-12-10 00:14:43.071194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.321 [2024-12-10 00:14:43.071203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.321 [2024-12-10 00:14:43.071209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.321 [2024-12-10 00:14:43.071215] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
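The rpc_cmd calls traced here and in the next few entries rebuild the target state after the restart: create the TCP transport, back it with a malloc bdev, create the subsystem, attach the namespace, and re-add the listener. The same sequence issued directly with SPDK's rpc.py would look roughly like the sketch below (the scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions; the test itself drives these calls through the rpc_cmd wrapper inside the cvl_0_0_ns_spdk namespace):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420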
00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.321 [2024-12-10 00:14:43.083472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.321 [2024-12-10 00:14:43.083888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.321 [2024-12-10 00:14:43.083904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.321 [2024-12-10 00:14:43.083912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.321 [2024-12-10 00:14:43.084090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.321 [2024-12-10 00:14:43.084274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.321 [2024-12-10 00:14:43.084283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.321 [2024-12-10 00:14:43.084289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.321 [2024-12-10 00:14:43.084295] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.321 [2024-12-10 00:14:43.096573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.321 [2024-12-10 00:14:43.096995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.321 [2024-12-10 00:14:43.097012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.321 [2024-12-10 00:14:43.097019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.321 [2024-12-10 00:14:43.097203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.321 [2024-12-10 00:14:43.097382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.321 [2024-12-10 00:14:43.097390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.321 [2024-12-10 00:14:43.097396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.321 [2024-12-10 00:14:43.097403] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:33:08.321 Malloc0 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.321 [2024-12-10 00:14:43.109659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.321 [2024-12-10 00:14:43.110044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.321 [2024-12-10 00:14:43.110061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.321 [2024-12-10 00:14:43.110069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.321 [2024-12-10 00:14:43.110252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.321 [2024-12-10 00:14:43.110430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.321 [2024-12-10 00:14:43.110438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.321 [2024-12-10 00:14:43.110445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:33:08.321 [2024-12-10 00:14:43.110456] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.321 [2024-12-10 00:14:43.122886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.321 [2024-12-10 00:14:43.123300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.321 [2024-12-10 00:14:43.123317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb7120 with addr=10.0.0.2, port=4420 00:33:08.321 [2024-12-10 00:14:43.123325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb7120 is same with the state(6) to be set 00:33:08.321 [2024-12-10 00:14:43.123503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7120 (9): Bad file descriptor 00:33:08.321 [2024-12-10 00:14:43.123681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:33:08.321 [2024-12-10 00:14:43.123689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:33:08.321 [2024-12-10 00:14:43.123696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:33:08.321 [2024-12-10 00:14:43.123702] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:08.321 [2024-12-10 00:14:43.128643] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.321 00:14:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 524382 00:33:08.321 [2024-12-10 00:14:43.135954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:33:08.321 [2024-12-10 00:14:43.165580] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:33:09.697 4768.17 IOPS, 18.63 MiB/s [2024-12-09T23:14:45.569Z] 5672.00 IOPS, 22.16 MiB/s [2024-12-09T23:14:46.502Z] 6388.00 IOPS, 24.95 MiB/s [2024-12-09T23:14:47.438Z] 6923.78 IOPS, 27.05 MiB/s [2024-12-09T23:14:48.373Z] 7359.10 IOPS, 28.75 MiB/s [2024-12-09T23:14:49.309Z] 7706.91 IOPS, 30.11 MiB/s [2024-12-09T23:14:50.244Z] 7971.92 IOPS, 31.14 MiB/s [2024-12-09T23:14:51.620Z] 8206.62 IOPS, 32.06 MiB/s [2024-12-09T23:14:52.557Z] 8423.86 IOPS, 32.91 MiB/s [2024-12-09T23:14:52.557Z] 8602.60 IOPS, 33.60 MiB/s 00:33:17.621 Latency(us) 00:33:17.621 [2024-12-09T23:14:52.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:17.621 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:17.621 Verification LBA range: start 0x0 length 0x4000 00:33:17.621 Nvme1n1 : 15.01 8606.63 33.62 10846.66 0.00 6559.88 669.61 28151.99 00:33:17.621 [2024-12-09T23:14:52.557Z] =================================================================================================================== 00:33:17.621 [2024-12-09T23:14:52.557Z] Total : 8606.63 33.62 10846.66 0.00 6559.88 669.61 28151.99 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 
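Interleaved with those failure records, the host/bdevperf.sh@17 through @21 calls bring the target side up over RPC, and the moment the 10.0.0.2:4420 listener is registered the pending reset succeeds and bdevperf ramps from roughly 4.8k to 8.6k IOPS. The same bring-up, collapsed into standalone commands (a sketch that assumes an already-running nvmf_tgt and the usual scripts/rpc.py entry point; the test issues the identical calls through its rpc_cmd wrapper):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The closing Latency(us) table is self-consistent: at the job's 4096-byte I/O size, 8606.63 IOPS * 4096 B is about 33.62 MiB/s, which matches the MiB/s column.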
00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:17.621 rmmod nvme_tcp 00:33:17.621 rmmod nvme_fabrics 00:33:17.621 rmmod nvme_keyring 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 525368 ']' 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 525368 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 525368 ']' 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 525368 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 525368 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 525368' 00:33:17.621 killing process with pid 525368 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 525368 00:33:17.621 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 525368 00:33:17.881 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:17.881 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:17.881 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:17.881 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:33:17.881 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:33:17.881 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:17.881 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:33:17.881 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:17.881 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:17.881 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.881 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.881 00:14:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:20.414 00:33:20.414 real 0m25.973s 00:33:20.414 user 1m0.595s 00:33:20.414 sys 0m6.741s 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 
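The nvmftestfini teardown traced above, reduced to plain commands (a sketch: killprocess, iptr and _remove_spdk_ns are framework helpers, and the kill signal and the namespace deletion are assumptions since the log does not expand them):

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 525368                                            # killprocess: the nvmf_tgt pid of this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop only the SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                                # follows on the next trace line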
00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:20.414 ************************************ 00:33:20.414 END TEST nvmf_bdevperf 00:33:20.414 ************************************ 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.414 ************************************ 00:33:20.414 START TEST nvmf_target_disconnect 00:33:20.414 ************************************ 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:20.414 * Looking for test storage... 00:33:20.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:20.414 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:20.415 00:14:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:20.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.415 --rc genhtml_branch_coverage=1 00:33:20.415 --rc genhtml_function_coverage=1 00:33:20.415 --rc genhtml_legend=1 00:33:20.415 --rc geninfo_all_blocks=1 00:33:20.415 --rc geninfo_unexecuted_blocks=1 00:33:20.415 00:33:20.415 ' 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:20.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.415 --rc genhtml_branch_coverage=1 00:33:20.415 --rc genhtml_function_coverage=1 00:33:20.415 --rc genhtml_legend=1 00:33:20.415 --rc geninfo_all_blocks=1 00:33:20.415 --rc geninfo_unexecuted_blocks=1 00:33:20.415 00:33:20.415 ' 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:20.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.415 --rc genhtml_branch_coverage=1 00:33:20.415 --rc genhtml_function_coverage=1 00:33:20.415 --rc genhtml_legend=1 00:33:20.415 --rc geninfo_all_blocks=1 00:33:20.415 --rc geninfo_unexecuted_blocks=1 00:33:20.415 00:33:20.415 ' 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:20.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.415 --rc genhtml_branch_coverage=1 00:33:20.415 --rc genhtml_function_coverage=1 00:33:20.415 --rc genhtml_legend=1 00:33:20.415 --rc geninfo_all_blocks=1 00:33:20.415 --rc geninfo_unexecuted_blocks=1 00:33:20.415 00:33:20.415 ' 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:20.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:33:20.415 00:14:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:25.691 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:25.691 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.691 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:25.692 Found net devices under 0000:86:00.0: cvl_0_0 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:25.692 Found net devices under 0000:86:00.1: cvl_0_1 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
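The two 'Found net devices under ...' results come straight from sysfs: each E810 function lists its netdev name under its PCI node, and nvmf_tcp_init, which starts here, then splits the pair into a target side and an initiator side. A sketch of the discovery step (paths taken from the trace):

  for pci in 0000:86:00.0 0000:86:00.1; do
      echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"    # cvl_0_0 and cvl_0_1
  done

and of the addressing the following lines set up, where cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2) and cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1); the real iptables call also tags its rule with an SPDK_NVMF comment so the later teardown can strip it:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1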
00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:25.692 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:25.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:25.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:33:25.951 00:33:25.951 --- 10.0.0.2 ping statistics --- 00:33:25.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.951 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:25.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:25.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:33:25.951 00:33:25.951 --- 10.0.0.1 ping statistics --- 00:33:25.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.951 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:25.951 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:26.211 ************************************ 00:33:26.211 START TEST nvmf_target_disconnect_tc1 00:33:26.211 ************************************ 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:33:26.211 00:15:00 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect ]] 00:33:26.211 00:15:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:26.211 [2024-12-10 00:15:01.028721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:26.211 [2024-12-10 00:15:01.028830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab9ac0 with addr=10.0.0.2, port=4420 00:33:26.211 [2024-12-10 00:15:01.028883] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:26.211 [2024-12-10 00:15:01.028919] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:26.211 [2024-12-10 00:15:01.028939] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:33:26.211 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:26.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect: errors occurred 00:33:26.211 Initializing NVMe Controllers 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:26.211 00:33:26.211 real 0m0.117s 00:33:26.211 user 0m0.050s 00:33:26.211 sys 0m0.066s 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:26.211 ************************************ 00:33:26.211 END TEST nvmf_target_disconnect_tc1 00:33:26.211 ************************************ 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:26.211 ************************************ 00:33:26.211 START TEST nvmf_target_disconnect_tc2 00:33:26.211 ************************************ 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=530537 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 530537 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 530537 ']' 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.211 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.212 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.471 [2024-12-10 00:15:01.167527] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:33:26.471 [2024-12-10 00:15:01.167569] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.471 [2024-12-10 00:15:01.247922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:26.471 [2024-12-10 00:15:01.289108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.471 [2024-12-10 00:15:01.289147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:26.471 [2024-12-10 00:15:01.289155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.471 [2024-12-10 00:15:01.289166] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:26.471 [2024-12-10 00:15:01.289171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:26.471 [2024-12-10 00:15:01.290845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:26.471 [2024-12-10 00:15:01.290880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:26.471 [2024-12-10 00:15:01.290901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:26.471 [2024-12-10 00:15:01.290904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:26.471 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.471 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:33:26.471 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:26.471 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:26.471 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.730 Malloc0 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.730 [2024-12-10 00:15:01.461474] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.730 00:15:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.730 [2024-12-10 00:15:01.489696] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=530644 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:26.730 00:15:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:28.639 00:15:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 530537 00:33:28.639 00:15:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error 
(sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Write completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Write completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Write completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Write completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Write completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Write completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Write completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Write completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Write completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Write completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Write completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.639 Read completed with error (sct=0, sc=8) 00:33:28.639 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 [2024-12-10 00:15:03.527567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write 
completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 [2024-12-10 00:15:03.527769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 
00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 [2024-12-10 00:15:03.527968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting 
I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Write completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 Read completed with error (sct=0, sc=8) 00:33:28.640 starting I/O failed 00:33:28.640 [2024-12-10 00:15:03.528183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:28.640 [2024-12-10 00:15:03.528318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.640 [2024-12-10 00:15:03.528340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.640 qpair failed and we were unable to recover it. 00:33:28.640 [2024-12-10 00:15:03.528514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.528568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.528709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.528743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.528860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.528890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.529075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.529106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.529234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.529264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.529385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.529415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.529550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.529559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 
00:33:28.641 [2024-12-10 00:15:03.529631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.529641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.529716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.529726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.529802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.529812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.530081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.530112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.530270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.530302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.530423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.530453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.530572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.530599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.530689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.530699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.530823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.530832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.530954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.530964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 
00:33:28.641 [2024-12-10 00:15:03.531035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.531045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.531124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.531134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.531278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.531289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.531453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.531463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.531523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.531534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.531664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.531673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.531729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.531740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.531901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.531910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.532051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.532082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.532258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.532291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 
00:33:28.641 [2024-12-10 00:15:03.532471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.532502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.532672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.532682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.532817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.532827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.532895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.532905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.532960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.532970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.533036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.533046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.533105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.533114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.533252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.641 [2024-12-10 00:15:03.533261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.641 qpair failed and we were unable to recover it. 00:33:28.641 [2024-12-10 00:15:03.533337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.533347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.533470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.533480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 
00:33:28.642 [2024-12-10 00:15:03.533618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.533628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.533769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.533798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.533909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.533938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.534102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.534169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.534302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.534314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.534469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.534479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.534625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.534657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.534827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.534857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.534964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.534996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.535169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.535202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 
00:33:28.642 [2024-12-10 00:15:03.535369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.535380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.535515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.535525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.535579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.535588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.535646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.535656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.535730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.535740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.535821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.535830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.535950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.535960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.536097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.536108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.536191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.536202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.536336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.536346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 
00:33:28.642 [2024-12-10 00:15:03.536414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.536424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.536492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.536502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.536684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.536715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.536826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.536858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.536979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.537010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.537131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.537186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.537357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.537389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.537510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.537541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.537703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.537713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.537831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.537841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 
00:33:28.642 [2024-12-10 00:15:03.537972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.537983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.538054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.538064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.538199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.538210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.538366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.538377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.538429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.538439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.538495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.538505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.642 [2024-12-10 00:15:03.538582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.642 [2024-12-10 00:15:03.538592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.642 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.538679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.538688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.538822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.538832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.538889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.538899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 
00:33:28.643 [2024-12-10 00:15:03.539029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.539040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.539099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.539108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.539231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.539242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.539365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.539377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.539467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.539477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.539546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.539556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.539640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.539650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.539713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.539723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.539793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.539803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.539872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.539882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 
00:33:28.643 [2024-12-10 00:15:03.539949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.539960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.540083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.540093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.540220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.540231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.540289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.540298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.540355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.540365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.540582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.540592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.540669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.540679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.540754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.540764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.540830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.540840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.540913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.540923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 
00:33:28.643 [2024-12-10 00:15:03.540977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.540987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.541047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.541056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.541186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.541196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.541334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.541349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.541412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.541425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.541495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.541508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.541647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.541660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.541795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.541809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.541909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.541922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.542060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.542073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 
00:33:28.643 [2024-12-10 00:15:03.542164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.542178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.542247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.542260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.542342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.542356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.542474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.542505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.542689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.542719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.643 [2024-12-10 00:15:03.542926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.643 [2024-12-10 00:15:03.542957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.643 qpair failed and we were unable to recover it. 00:33:28.644 [2024-12-10 00:15:03.543075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.644 [2024-12-10 00:15:03.543107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.644 qpair failed and we were unable to recover it. 00:33:28.644 [2024-12-10 00:15:03.543287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.644 [2024-12-10 00:15:03.543319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.644 qpair failed and we were unable to recover it. 00:33:28.644 [2024-12-10 00:15:03.543430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.644 [2024-12-10 00:15:03.543461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.644 qpair failed and we were unable to recover it. 00:33:28.644 [2024-12-10 00:15:03.543643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.644 [2024-12-10 00:15:03.543676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.644 qpair failed and we were unable to recover it. 
00:33:28.644 [2024-12-10 00:15:03.543790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.644 [2024-12-10 00:15:03.543803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.644 qpair failed and we were unable to recover it. 00:33:28.644 [2024-12-10 00:15:03.543869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.644 [2024-12-10 00:15:03.543881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.644 qpair failed and we were unable to recover it. 00:33:28.644 [2024-12-10 00:15:03.544035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.644 [2024-12-10 00:15:03.544048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.644 qpair failed and we were unable to recover it. 00:33:28.644 [2024-12-10 00:15:03.544121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.644 [2024-12-10 00:15:03.544137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.644 qpair failed and we were unable to recover it. 00:33:28.644 [2024-12-10 00:15:03.544281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.644 [2024-12-10 00:15:03.544295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.644 qpair failed and we were unable to recover it. 00:33:28.644 [2024-12-10 00:15:03.544449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.644 [2024-12-10 00:15:03.544462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.644 qpair failed and we were unable to recover it. 00:33:28.644 [2024-12-10 00:15:03.544542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.644 [2024-12-10 00:15:03.544556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.644 qpair failed and we were unable to recover it. 00:33:28.644 [2024-12-10 00:15:03.544707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.644 [2024-12-10 00:15:03.544721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.644 qpair failed and we were unable to recover it. 00:33:28.644 [2024-12-10 00:15:03.544878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.644 [2024-12-10 00:15:03.544910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.644 qpair failed and we were unable to recover it. 00:33:28.644 [2024-12-10 00:15:03.545030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.644 [2024-12-10 00:15:03.545061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.644 qpair failed and we were unable to recover it. 
00:33:28.644 [2024-12-10 00:15:03.545168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.644 [2024-12-10 00:15:03.545199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420
00:33:28.644 qpair failed and we were unable to recover it.
00:33:28.644 [... the same pair of errors (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420) repeats for every reconnect attempt from 00:15:03.545323 through 00:15:03.576816, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:33:28.936 [2024-12-10 00:15:03.576816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.936 [2024-12-10 00:15:03.576838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420
00:33:28.936 qpair failed and we were unable to recover it.
00:33:28.936 [2024-12-10 00:15:03.576986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.577008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.577115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.577137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.577235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.577257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.577341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.577364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.577548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.577570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.577714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.577737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.577849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.577872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.578053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.578085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.578196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.578225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.578442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.578474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 
00:33:28.936 [2024-12-10 00:15:03.578645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.578677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.578863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.578895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.579106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.579136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.579354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.579386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.579499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.579528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.579645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.579677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.579817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.579838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.579995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.580017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.580234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.580258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.580408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.580431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 
00:33:28.936 [2024-12-10 00:15:03.580616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.580646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.580811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.580841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.936 [2024-12-10 00:15:03.581043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.936 [2024-12-10 00:15:03.581075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.936 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.581188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.581223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.581404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.581436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.581535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.581566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.581730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.581774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.581960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.581981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.582076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.582099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.582204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.582227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 
00:33:28.937 [2024-12-10 00:15:03.582395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.582418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.582505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.582527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.582674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.582711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.582818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.582844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.582955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.582981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.583101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.583126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.583226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.583253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.583358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.583384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.583642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.583674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.583788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.583819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 
00:33:28.937 [2024-12-10 00:15:03.583942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.583973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.584209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.584241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.584430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.584463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.584577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.584608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.584775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.584813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.584968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.584995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.585098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.585124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.585297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.585324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.585483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.585508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.585613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.585654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 
00:33:28.937 [2024-12-10 00:15:03.585829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.585861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.585977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.586009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.586247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.586279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.586388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.586420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.586587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.586617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.586738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.586770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.586898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.586923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.587113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.587139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.587304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.587331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.937 [2024-12-10 00:15:03.587445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.587488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 
00:33:28.937 [2024-12-10 00:15:03.587656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.937 [2024-12-10 00:15:03.587686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.937 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.587851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.587882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.587982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.588011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.588180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.588218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.588340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.588370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.588536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.588568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.588732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.588758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.588913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.588939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.589193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.589226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.589462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.589494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 
00:33:28.938 [2024-12-10 00:15:03.589660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.589691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.589814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.589840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.589945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.589971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.590067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.590094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.590274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.590300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.590405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.590431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.590586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.590612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.590776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.590807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.590993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.591023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.591188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.591220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 
00:33:28.938 [2024-12-10 00:15:03.591496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.591528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.591704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.591731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.591842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.591867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.591960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.591986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.592091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.592117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.592216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.592243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.592337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.592363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.592449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.592475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.592725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.592751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.592858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.592886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 
00:33:28.938 [2024-12-10 00:15:03.593143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.593182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.593341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.593369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.593471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.593498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.593697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.593726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.593817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.593846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.594003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.594030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.594231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.594264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.594383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.594413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.594583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.938 [2024-12-10 00:15:03.594612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.938 qpair failed and we were unable to recover it. 00:33:28.938 [2024-12-10 00:15:03.594776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.594806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 
00:33:28.939 [2024-12-10 00:15:03.595001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.595030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.595140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.595180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.595354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.595384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.595631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.595665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.595770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.595797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.595891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.595920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.596081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.596108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.596234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.596262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.596358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.596387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.596495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.596522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 
00:33:28.939 [2024-12-10 00:15:03.596637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.596665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.596829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.596860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.597025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.597055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.597223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.597255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.597437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.597466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.597633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.597663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.597849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.597881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.598064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.598094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.598264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.598298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.598534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.598563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 
00:33:28.939 [2024-12-10 00:15:03.598725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.598757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.598933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.598965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.599068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.599097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.599219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.599251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.599374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.599404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.599640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.599671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.599774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.599815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.600001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.600029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.600231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.600260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 00:33:28.939 [2024-12-10 00:15:03.600437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.939 [2024-12-10 00:15:03.600469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.939 qpair failed and we were unable to recover it. 
00:33:28.939 [2024-12-10 00:15:03.600575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.939 [2024-12-10 00:15:03.600606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420
00:33:28.939 qpair failed and we were unable to recover it.
00:33:28.942 [2024-12-10 00:15:03.615098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.942 [2024-12-10 00:15:03.615129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420
00:33:28.942 qpair failed and we were unable to recover it.
00:33:28.942 [2024-12-10 00:15:03.615376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.942 [2024-12-10 00:15:03.615449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420
00:33:28.942 qpair failed and we were unable to recover it.
00:33:28.942 [2024-12-10 00:15:03.615710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.942 [2024-12-10 00:15:03.615781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:28.942 qpair failed and we were unable to recover it.
00:33:28.945 [2024-12-10 00:15:03.640313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.945 [2024-12-10 00:15:03.640343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:28.945 qpair failed and we were unable to recover it.
00:33:28.945 [2024-12-10 00:15:03.640454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.640484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.640590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.640621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.640792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.640823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.641012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.641042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.641144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.641186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.641374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.641406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.641527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.641558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.641677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.641709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.641875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.641906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.642072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.642103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 
00:33:28.945 [2024-12-10 00:15:03.642292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.642325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.642494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.642526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.642654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.642685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.642948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.642979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.643147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.643213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.643324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.643355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.643536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.643567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.643825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.643856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.643954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.945 [2024-12-10 00:15:03.643986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.945 qpair failed and we were unable to recover it. 00:33:28.945 [2024-12-10 00:15:03.644151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.644200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 
00:33:28.946 [2024-12-10 00:15:03.644378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.644410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.644581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.644612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.644815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.644846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.645040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.645071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.645261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.645294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.645392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.645423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.645558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.645588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.645761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.645792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.645958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.645989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.646090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.646121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 
00:33:28.946 [2024-12-10 00:15:03.646250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.646281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.646452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.646483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.646648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.646679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.646801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.646833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.646943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.646973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.647139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.647179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.647282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.647312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.647501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.647532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.647725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.647756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.647878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.647909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 
00:33:28.946 [2024-12-10 00:15:03.648012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.648042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.648192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.648225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.648404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.648435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.648560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.648591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.648693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.648725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.648851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.648883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.649000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.649037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.649140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.649237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.649423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.649454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.649580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.649612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 
00:33:28.946 [2024-12-10 00:15:03.649721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.649752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.649918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.649951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.650130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.650172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.650363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.650395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.650567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.650598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.650783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.650815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.651075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.651106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.651297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.651330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.946 [2024-12-10 00:15:03.651456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.946 [2024-12-10 00:15:03.651488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.946 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.651675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.651704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 
00:33:28.947 [2024-12-10 00:15:03.651830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.651862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.652037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.652069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.652236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.652269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.652373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.652404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.652596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.652627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.652830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.652862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.652980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.653010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.653201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.653234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.653349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.653380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.653547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.653577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 
00:33:28.947 [2024-12-10 00:15:03.653679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.653711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.653946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.653977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.654170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.654202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.654326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.654357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.654552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.654584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.654755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.654786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.654897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.654939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.655059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.655090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.655271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.655303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.655419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.655449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 
00:33:28.947 [2024-12-10 00:15:03.655651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.655682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.655871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.655903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.656069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.656099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.656234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.656267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.656537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.656568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.656683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.656715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.656815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.656846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.657017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.657049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.657307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.657340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.657455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.657487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 
00:33:28.947 [2024-12-10 00:15:03.657658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.657688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.657808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.657839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.657947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.657978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.658150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.658191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.658371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.658401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.658513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.658544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.658716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.658746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.658914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.947 [2024-12-10 00:15:03.658946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.947 qpair failed and we were unable to recover it. 00:33:28.947 [2024-12-10 00:15:03.659187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.659219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.659396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.659427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 
00:33:28.948 [2024-12-10 00:15:03.659607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.659638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.659834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.659865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.660103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.660133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.660275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.660307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.660475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.660505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.660616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.660648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.660842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.660872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.661000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.661030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.661133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.661172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.661354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.661386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 
00:33:28.948 [2024-12-10 00:15:03.661622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.661653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.661843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.661874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.661994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.662025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.662216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.662248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.662360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.662396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.662513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.662544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.662721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.662752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.662918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.662949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.663051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.663082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.663196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.663230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 
00:33:28.948 [2024-12-10 00:15:03.663357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.663388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.663506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.663538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.663736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.663767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.663878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.663908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.664098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.664130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.664282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.664340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.664499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.664541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.664765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.664800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.664959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.664996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.665372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.665413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 
00:33:28.948 [2024-12-10 00:15:03.665631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.665668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.665868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.665908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.666129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.666185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.666351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.948 [2024-12-10 00:15:03.666397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.948 qpair failed and we were unable to recover it. 00:33:28.948 [2024-12-10 00:15:03.666698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.666732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.666916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.666950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.667072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.667104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.667316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.667348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.667457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.667489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.667591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.667621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 
00:33:28.949 [2024-12-10 00:15:03.667724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.667756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.667876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.667913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.668016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.668047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.668179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.668211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.668387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.668418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.668522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.668554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.668673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.668704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.668869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.668900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.669081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.669113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.669294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.669327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 
00:33:28.949 [2024-12-10 00:15:03.669495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.669526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.669696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.669728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.669905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.669943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.670208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.670241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.670484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.670515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.670633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.670664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.670775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.670806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.670906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.670937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.671057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.671088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.671267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.671299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 
00:33:28.949 [2024-12-10 00:15:03.671421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.671453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.671565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.671596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.671763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.671794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.671998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.672028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.672227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.672258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.672425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.672457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.672568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.672598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.672713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.672744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.672872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.672908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.673032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.673062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 
00:33:28.949 [2024-12-10 00:15:03.673257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.673291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.673396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.673427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.673546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.949 [2024-12-10 00:15:03.673578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.949 qpair failed and we were unable to recover it. 00:33:28.949 [2024-12-10 00:15:03.673743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.673773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.673943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.673974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.674180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.674213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.674348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.674378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.674482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.674514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.674625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.674656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.674825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.674857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 
00:33:28.950 [2024-12-10 00:15:03.675022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.675052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.675177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.675209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.675455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.675486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.675598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.675629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.675729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.675760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.675950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.675982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.676095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.676125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.676314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.676348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.676609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.676640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.676807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.676838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 
00:33:28.950 [2024-12-10 00:15:03.676956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.676986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.677098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.677130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.677383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.677453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.677650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.677684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.677860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.677891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.678061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.678109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.678292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.678326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.678515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.678546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.678734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.678765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.678879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.678911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 
00:33:28.950 [2024-12-10 00:15:03.679031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.679062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.679231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.679264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.679391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.679422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.679524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.679555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.679671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.679702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.679804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.679834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.680001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.680031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.680200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.680232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.680401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.680432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.680549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.680581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 
00:33:28.950 [2024-12-10 00:15:03.680766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.680797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.680900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.680931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.950 qpair failed and we were unable to recover it. 00:33:28.950 [2024-12-10 00:15:03.681093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.950 [2024-12-10 00:15:03.681124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.681234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.681266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.681551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.681582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.681761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.681793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.681892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.681923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.682090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.682121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.682258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.682291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.682549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.682580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 
00:33:28.951 [2024-12-10 00:15:03.682750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.682782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.682898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.682930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.683140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.683183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.683304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.683336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.683501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.683532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.683789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.683820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.683995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.684025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.684195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.684226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.684327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.684358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.684471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.684502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 
00:33:28.951 [2024-12-10 00:15:03.684696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.684727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.684842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.684872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.685067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.685098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.685354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.685386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.685576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.685607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.685728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.685758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.685872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.685904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.686072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.686102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.686207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.686239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.686402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.686433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 
00:33:28.951 [2024-12-10 00:15:03.686615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.686647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.686841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.686871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.686974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.687003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.687109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.687140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.687320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.687351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.687519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.687549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.687715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.687747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.688005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.688035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.688225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.688256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.688371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.688408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 
00:33:28.951 [2024-12-10 00:15:03.688530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.951 [2024-12-10 00:15:03.688560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.951 qpair failed and we were unable to recover it. 00:33:28.951 [2024-12-10 00:15:03.688661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.688692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.688952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.688983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.689152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.689192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.689404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.689435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.689613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.689645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.689759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.689790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.689906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.689937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.690102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.690132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.690307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.690339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 
00:33:28.952 [2024-12-10 00:15:03.690527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.690557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.690674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.690704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.690820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.690851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.690960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.690992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.691095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.691124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.691340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.691374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.691562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.691592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.691789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.691819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.691995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.692026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.692226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.692258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 
00:33:28.952 [2024-12-10 00:15:03.692371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.692400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.692581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.692612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.692719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.692749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.692854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.692886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.693000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.693030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.693268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.693301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.693508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.693545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.693730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.693761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.693934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.693965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.694078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.694109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 
00:33:28.952 [2024-12-10 00:15:03.694283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.694315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.694431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.694462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.694580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.694611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.694778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.952 [2024-12-10 00:15:03.694810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.952 qpair failed and we were unable to recover it. 00:33:28.952 [2024-12-10 00:15:03.694910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.694939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.695049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.695081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.695282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.695314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.695480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.695511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.695674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.695705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.695869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.695900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 
00:33:28.953 [2024-12-10 00:15:03.696035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.696067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.696248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.696279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.696483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.696515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.696681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.696712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.696840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.696871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.697043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.697074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.697245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.697277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.697467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.697498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.697603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.697634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.697742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.697772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 
00:33:28.953 [2024-12-10 00:15:03.697903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.697933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.698043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.698074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.698240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.698272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.698401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.698430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.698536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.698567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.698660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.698690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.698795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.698825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.699016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.699047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.699175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.699207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.699470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.699501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 
00:33:28.953 [2024-12-10 00:15:03.699625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.699657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.699824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.699855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.700049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.700079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.700197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.700230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.700341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.700373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.700481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.700512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.700747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.700778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.700977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.701008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.701197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.701229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.701349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.701379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 
00:33:28.953 [2024-12-10 00:15:03.701496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.701527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.701723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.701753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.701967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.701997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.953 [2024-12-10 00:15:03.702169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.953 [2024-12-10 00:15:03.702202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.953 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.702392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.702422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.702560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.702591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.702776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.702807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.702929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.702960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.703113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.703144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.703253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.703283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 
00:33:28.954 [2024-12-10 00:15:03.703450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.703482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.703653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.703684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.703851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.703882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.703994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.704030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.704149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.704188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.704293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.704322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.704529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.704560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.704820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.704850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.704950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.704981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.705178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.705210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 
00:33:28.954 [2024-12-10 00:15:03.705392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.705423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.705533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.705563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.705662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.705692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.705900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.705932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.706102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.706138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.706262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.706294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.706454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.706485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.706685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.706715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.706886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.706917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.707172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.707205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 
00:33:28.954 [2024-12-10 00:15:03.707442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.707472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.707730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.707761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.707927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.707958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.708123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.708154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.708358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.708390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.708503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.708534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.708636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.708667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.708778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.708809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.708986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.709017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.709204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.709237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 
00:33:28.954 [2024-12-10 00:15:03.709352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.709384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.709502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.709532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.954 [2024-12-10 00:15:03.709699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.954 [2024-12-10 00:15:03.709730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.954 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.709895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.709926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.710053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.710084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.710322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.710354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.710574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.710605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.710708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.710738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.710943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.710974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.711140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.711180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 
00:33:28.955 [2024-12-10 00:15:03.711308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.711340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.711595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.711631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.711821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.711851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.712017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.712048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.712235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.712279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.712457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.712490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.712595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.712625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.712758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.712790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.712890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.712921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.713173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.713206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 
00:33:28.955 [2024-12-10 00:15:03.713374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.713406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.713600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.713631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.713804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.713835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.714005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.714037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.714169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.714201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.714394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.714425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.714613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.714643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.714744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.714775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.714957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.714987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.715154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.715196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 
00:33:28.955 [2024-12-10 00:15:03.715344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.715380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.715528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.715559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.715793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.715823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.715946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.715978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.716152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.716212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.716392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.716423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.716539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.716570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.716740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.716771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.716886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.716923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.717114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.717147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 
00:33:28.955 [2024-12-10 00:15:03.717366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.717397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.717514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.955 [2024-12-10 00:15:03.717545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.955 qpair failed and we were unable to recover it. 00:33:28.955 [2024-12-10 00:15:03.717781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.717811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.717912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.717944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.718064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.718095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.718278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.718311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.718434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.718464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.718630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.718661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.718831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.718862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.718977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.719009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 
00:33:28.956 [2024-12-10 00:15:03.719126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.719165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.719361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.719392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.719625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.719695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.719960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.719995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.720114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.720146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.720298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.720330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.720490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.720520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.720702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.720734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.720905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.720936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.721130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.721175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 
00:33:28.956 [2024-12-10 00:15:03.721291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.721321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.721506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.721537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.721724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.721756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.721953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.721983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.722093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.722123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.722237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.722279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.722383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.722413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.722607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.722639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.722760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.722792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.722901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.722930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 
00:33:28.956 [2024-12-10 00:15:03.723174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.723207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.723399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.723430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.723533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.723564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.723683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.723713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.723881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.723912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.724029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.724060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.724319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.724352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.724455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.956 [2024-12-10 00:15:03.724483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.956 qpair failed and we were unable to recover it. 00:33:28.956 [2024-12-10 00:15:03.724598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.724628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.724733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.724763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 
00:33:28.957 [2024-12-10 00:15:03.724868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.724896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.725007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.725037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.725144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.725186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.725355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.725387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.725488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.725518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.725693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.725723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.725914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.725946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.726056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.726088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.726205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.726237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.726450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.726482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 
00:33:28.957 [2024-12-10 00:15:03.726606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.726637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.726810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.726841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.726962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.726994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.727095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.727126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.727305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.727340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.727510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.727542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.727639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.727670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.727875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.727907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.728181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.728214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.728356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.728390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 
00:33:28.957 [2024-12-10 00:15:03.728511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.728542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.728657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.728688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.728896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.728927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.729058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.729089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.729281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.729312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.729426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.729457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.729630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.729661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.729827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.729858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.730045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.730076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.730314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.730347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 
00:33:28.957 [2024-12-10 00:15:03.730453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.730483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.730721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.730752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.730920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.730952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.957 qpair failed and we were unable to recover it. 00:33:28.957 [2024-12-10 00:15:03.731143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.957 [2024-12-10 00:15:03.731183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.731355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.731387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.731553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.731584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.731705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.731736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.731851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.731881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.731996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.732028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.732207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.732239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 
00:33:28.958 [2024-12-10 00:15:03.732413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.732444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.732554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.732585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.732758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.732789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.732961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.732991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.733186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.733218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.733341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.733373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.733545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.733576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.733688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.733719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.733843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.733873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.733983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.734021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 
00:33:28.958 [2024-12-10 00:15:03.734135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.734176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.734342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.734373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.734633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.734664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.734790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.734822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.735078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.735108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.735226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.735257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.735445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.735476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.735662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.735694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.735875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.735906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.736072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.736104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 
00:33:28.958 [2024-12-10 00:15:03.736273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.736307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.736435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.736467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.736582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.736612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.736779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.736811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.736978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.737008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.737131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.737173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.737297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.737334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.737540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.737571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.737682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.737713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 00:33:28.958 [2024-12-10 00:15:03.737883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.958 [2024-12-10 00:15:03.737915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.958 qpair failed and we were unable to recover it. 
00:33:28.958 [2024-12-10 00:15:03.738079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.738109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.738333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.738366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.738542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.738572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.738691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.738722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.738887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.738919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.739047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.739078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.739249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.739282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.739542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.739573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.739740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.739772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.739963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.739994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 
00:33:28.959 [2024-12-10 00:15:03.740180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.740214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.740396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.740427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.740529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.740559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.740724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.740755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.740869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.740900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.741095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.741125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.741254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.741286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.741476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.741506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.741699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.741730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.741843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.741873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 
00:33:28.959 [2024-12-10 00:15:03.742077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.742109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.742285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.742316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.742505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.742537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.742652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.742689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.742814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.742845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.743022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.743053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.743174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.743207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.743377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.743407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.743576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.743607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.743798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.743829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 
00:33:28.959 [2024-12-10 00:15:03.743996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.744027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.744263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.744297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.744399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.744428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.744525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.744556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.744815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.744846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.744961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.744992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.745113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.745145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.745399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.745430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.745597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.959 [2024-12-10 00:15:03.745629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.959 qpair failed and we were unable to recover it. 00:33:28.959 [2024-12-10 00:15:03.745809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.745839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 
00:33:28.960 [2024-12-10 00:15:03.745948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.745980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.746167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.746199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.746304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.746335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.746450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.746481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.746581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.746611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.746776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.746807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.746920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.746951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.747117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.747148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.747409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.747441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.747684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.747714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 
00:33:28.960 [2024-12-10 00:15:03.747910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.747942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.748116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.748147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.748354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.748386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.748490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.748521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.748643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.748675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.748788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.748819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.749009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.749040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.749207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.749239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.749434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.749465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.749635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.749666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 
00:33:28.960 [2024-12-10 00:15:03.749768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.749800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.750034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.750065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.750248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.750279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.750398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.750429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.750620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.750652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.750752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.750783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.750891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.750921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.751086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.751117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.751288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.751320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.751486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.751524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 
00:33:28.960 [2024-12-10 00:15:03.751657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.751688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.751855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.751886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.752122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.752152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.960 [2024-12-10 00:15:03.752320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.960 [2024-12-10 00:15:03.752351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.960 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.752451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.752482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.752592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.752622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.752752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.752783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.753040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.753070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.753319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.753351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.753588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.753620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 
00:33:28.961 [2024-12-10 00:15:03.753730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.753761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.753928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.753959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.754111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.754141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.754323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.754355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.754521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.754550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.754663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.754694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.754896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.754927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.755043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.755076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.755192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.755227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.755394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.755426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 
00:33:28.961 [2024-12-10 00:15:03.755597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.755627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.755813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.755851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.755970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.756000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.756113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.756144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.756265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.756296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.756475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.756506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.756654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.756685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.756854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.756886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.757142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.757183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.757293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.757324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 
00:33:28.961 [2024-12-10 00:15:03.757496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.757527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.757716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.757747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.757850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.757880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.757999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.758031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.758279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.758311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.758500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.758532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.758730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.961 [2024-12-10 00:15:03.758761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.961 qpair failed and we were unable to recover it. 00:33:28.961 [2024-12-10 00:15:03.758863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.758894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.759064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.759094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.759208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.759241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 
00:33:28.962 [2024-12-10 00:15:03.759342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.759371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.759473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.759503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.759617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.759647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.759777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.759808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.759917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.759948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.760192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.760224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.760392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.760430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.760596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.760626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.760793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.760830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.761000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.761030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 
00:33:28.962 [2024-12-10 00:15:03.761199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.761231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.761431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.761462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.761629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.761659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.761922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.761953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.762122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.762152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.762339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.762370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.762544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.762574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.762704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.762735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.762856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.762887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.762991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.763022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 
00:33:28.962 [2024-12-10 00:15:03.763146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.763188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.763356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.763388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.763580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.763611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.763726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.763757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.764017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.764048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.764220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.764253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.764422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.764454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.764644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.764675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.764841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.764873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.764985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.765015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 
00:33:28.962 [2024-12-10 00:15:03.765208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.765241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.765376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.765408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.765507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.765537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.962 qpair failed and we were unable to recover it. 00:33:28.962 [2024-12-10 00:15:03.765703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.962 [2024-12-10 00:15:03.765734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.765842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.765872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.766034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.766071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.766239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.766270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.766398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.766429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.766559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.766589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.766693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.766724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 
00:33:28.963 [2024-12-10 00:15:03.766890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.766920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.767129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.767169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.767275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.767306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.767474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.767506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.767673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.767704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.767821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.767852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.767971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.768001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.768115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.768146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.768419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.768450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.768661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.768718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 
00:33:28.963 [2024-12-10 00:15:03.768923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.768963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.769185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.769224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.769411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.769447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.769654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.769690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.769827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.769868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.770005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.770048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.770193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.770236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.770366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.770407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.770621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.770658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.770809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.770849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 
00:33:28.963 [2024-12-10 00:15:03.771059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.771094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.771238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.771281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.771466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.771509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.771720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.771756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.772011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.772047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.772257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.772294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.772430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.772471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.772658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.772692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.772899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.772934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 00:33:28.963 [2024-12-10 00:15:03.773177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.773214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.963 qpair failed and we were unable to recover it. 
00:33:28.963 [2024-12-10 00:15:03.773360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.963 [2024-12-10 00:15:03.773402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.773542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.773588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.773776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.773811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.774051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.774087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.774294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.774330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.774473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.774516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.774766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.774799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.775048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.775100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.775320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.775362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.775560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.775600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 
00:33:28.964 [2024-12-10 00:15:03.775811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.775849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.776066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.776098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.776229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.776272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.776391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.776429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.776636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.776668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.776848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.776880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.777023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.777056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.777186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.777224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.777416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.777448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.777614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.777684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 
00:33:28.964 [2024-12-10 00:15:03.777879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.777914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.778026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.778059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.778249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.778282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.778548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.778581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.778749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.778780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.779045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.779075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.779246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.779278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.779398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.779429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.779611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.779642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.779769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.779800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 
00:33:28.964 [2024-12-10 00:15:03.779919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.779950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.780135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.780177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.780299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.780330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.780444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.780475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.780663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.780694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.780798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.780828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.781010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.781040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.781227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.781260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.781501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.781532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.964 qpair failed and we were unable to recover it. 00:33:28.964 [2024-12-10 00:15:03.781658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.964 [2024-12-10 00:15:03.781689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 
00:33:28.965 [2024-12-10 00:15:03.781790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.781821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.781931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.781962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.782148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.782189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.782309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.782340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.782514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.782546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.782726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.782757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.782994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.783031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.783202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.783234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.783339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.783369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.783533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.783564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 
00:33:28.965 [2024-12-10 00:15:03.783738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.783769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.783900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.783930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.784034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.784065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.784298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.784329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.784501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.784534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.784700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.784730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.784831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.784861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.785038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.785068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.785225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.785258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.785359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.785390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 
00:33:28.965 [2024-12-10 00:15:03.785516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.785547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.785663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.785694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.785804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.785835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.786006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.786037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.786256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.786288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.786472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.786503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.786740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.786771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.786940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.786971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.787082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.787113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.787314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.787346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 
00:33:28.965 [2024-12-10 00:15:03.787454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.787485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.787736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.787767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.787893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.787924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.788028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.788065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.788291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.788323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.788434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.788466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.788594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.788624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.788794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.788825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.965 [2024-12-10 00:15:03.788995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.965 [2024-12-10 00:15:03.789025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.965 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.789198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.789231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 
00:33:28.966 [2024-12-10 00:15:03.789428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.789459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.789654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.789685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.789800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.789832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.789951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.789982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.790175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.790208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.790315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.790346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.790535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.790566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.790756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.790788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.790968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.791000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.791115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.791146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 
00:33:28.966 [2024-12-10 00:15:03.791269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.791300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.791466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.791497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.791663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.791694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.791977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.792008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.792138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.792178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.792293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.792324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.792429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.792460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.792627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.792658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.792829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.792860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.793031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.793061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 
00:33:28.966 [2024-12-10 00:15:03.793226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.793265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.793392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.793423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.793589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.793620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.793822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.793854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.794123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.794153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.794330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.794361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.794530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.794560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.794662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.794694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.794797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.794828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.795005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.795036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 
00:33:28.966 [2024-12-10 00:15:03.795205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.795237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.795429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.795460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.795626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.795657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.795775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.966 [2024-12-10 00:15:03.795811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.966 qpair failed and we were unable to recover it. 00:33:28.966 [2024-12-10 00:15:03.796078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.967 [2024-12-10 00:15:03.796109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.967 qpair failed and we were unable to recover it. 00:33:28.967 [2024-12-10 00:15:03.796324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.967 [2024-12-10 00:15:03.796356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.967 qpair failed and we were unable to recover it. 00:33:28.967 [2024-12-10 00:15:03.796528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.967 [2024-12-10 00:15:03.796558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.967 qpair failed and we were unable to recover it. 00:33:28.967 [2024-12-10 00:15:03.796689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.967 [2024-12-10 00:15:03.796720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.967 qpair failed and we were unable to recover it. 00:33:28.967 [2024-12-10 00:15:03.796828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.967 [2024-12-10 00:15:03.796859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.967 qpair failed and we were unable to recover it. 00:33:28.967 [2024-12-10 00:15:03.796970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.967 [2024-12-10 00:15:03.797001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.967 qpair failed and we were unable to recover it. 
00:33:28.967 [2024-12-10 00:15:03.797191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.967 [2024-12-10 00:15:03.797224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:28.967 qpair failed and we were unable to recover it.
00:33:28.967 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 00:15:03.797337 through 00:15:03.817875 ...]
00:33:28.969 [... further retries against tqpair=0x24c9be0 (addr=10.0.0.2, port=4420) fail the same way from 00:15:03.817977 through 00:15:03.819128 ...]
00:33:28.970 [2024-12-10 00:15:03.819324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d7b20 is same with the state(6) to be set
00:33:28.970 [2024-12-10 00:15:03.819641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:28.970 [2024-12-10 00:15:03.819711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420
00:33:28.970 qpair failed and we were unable to recover it.
00:33:28.970 [... identical failures for tqpair=0x7f0294000b90 continue at 00:15:03.819912 and 00:15:03.820211 ...]
00:33:28.970 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry from 00:15:03.820351 through 00:15:03.837710 ...]
00:33:28.972 [2024-12-10 00:15:03.837898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.972 [2024-12-10 00:15:03.837929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.972 qpair failed and we were unable to recover it. 00:33:28.972 [2024-12-10 00:15:03.838099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.972 [2024-12-10 00:15:03.838129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:28.972 qpair failed and we were unable to recover it. 00:33:28.972 [2024-12-10 00:15:03.838300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.972 [2024-12-10 00:15:03.838358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:28.972 qpair failed and we were unable to recover it. 00:33:28.972 [2024-12-10 00:15:03.838544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.972 [2024-12-10 00:15:03.838615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.972 qpair failed and we were unable to recover it. 00:33:28.972 [2024-12-10 00:15:03.838747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.972 [2024-12-10 00:15:03.838781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.972 qpair failed and we were unable to recover it. 00:33:28.972 [2024-12-10 00:15:03.838890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.972 [2024-12-10 00:15:03.838922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.972 qpair failed and we were unable to recover it. 00:33:28.972 [2024-12-10 00:15:03.839133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.972 [2024-12-10 00:15:03.839174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.972 qpair failed and we were unable to recover it. 00:33:28.972 [2024-12-10 00:15:03.839291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.972 [2024-12-10 00:15:03.839322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.972 qpair failed and we were unable to recover it. 00:33:28.972 [2024-12-10 00:15:03.839443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.972 [2024-12-10 00:15:03.839472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.972 qpair failed and we were unable to recover it. 00:33:28.972 [2024-12-10 00:15:03.839576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.972 [2024-12-10 00:15:03.839606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.972 qpair failed and we were unable to recover it. 
00:33:28.972 [2024-12-10 00:15:03.839799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.972 [2024-12-10 00:15:03.839830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.972 qpair failed and we were unable to recover it. 00:33:28.972 [2024-12-10 00:15:03.840011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.972 [2024-12-10 00:15:03.840042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.972 qpair failed and we were unable to recover it. 00:33:28.972 [2024-12-10 00:15:03.840232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.972 [2024-12-10 00:15:03.840265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.973 qpair failed and we were unable to recover it. 00:33:28.973 [2024-12-10 00:15:03.840436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:28.973 [2024-12-10 00:15:03.840477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:28.973 qpair failed and we were unable to recover it. 00:33:29.313 [2024-12-10 00:15:03.840665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.313 [2024-12-10 00:15:03.840695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.313 qpair failed and we were unable to recover it. 00:33:29.313 [2024-12-10 00:15:03.840863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.313 [2024-12-10 00:15:03.840894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.313 qpair failed and we were unable to recover it. 00:33:29.313 [2024-12-10 00:15:03.841079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.313 [2024-12-10 00:15:03.841110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.313 qpair failed and we were unable to recover it. 00:33:29.313 [2024-12-10 00:15:03.841323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.313 [2024-12-10 00:15:03.841356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.313 qpair failed and we were unable to recover it. 00:33:29.313 [2024-12-10 00:15:03.841597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.313 [2024-12-10 00:15:03.841628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.313 qpair failed and we were unable to recover it. 00:33:29.313 [2024-12-10 00:15:03.841738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.313 [2024-12-10 00:15:03.841769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.313 qpair failed and we were unable to recover it. 
00:33:29.313 [2024-12-10 00:15:03.841881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.313 [2024-12-10 00:15:03.841911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.313 qpair failed and we were unable to recover it. 00:33:29.313 [2024-12-10 00:15:03.842030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.313 [2024-12-10 00:15:03.842074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.842218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.842261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.842372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.842404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.842509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.842539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.842664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.842696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.842881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.842913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.843046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.843084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.843216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.843247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.843350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.843378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 
00:33:29.314 [2024-12-10 00:15:03.843570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.843602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.843860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.843891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.844078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.844109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.844309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.844342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.844458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.844489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.844674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.844704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.844873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.844904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.845021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.845051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.845228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.845259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.845373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.845403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 
00:33:29.314 [2024-12-10 00:15:03.845614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.845654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.845783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.845814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.846099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.846131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.846315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.846345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.846463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.846492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.846599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.314 [2024-12-10 00:15:03.846630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.314 qpair failed and we were unable to recover it. 00:33:29.314 [2024-12-10 00:15:03.846753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.846783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.846884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.846914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.847032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.847064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.847191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.847222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 
00:33:29.315 [2024-12-10 00:15:03.847392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.847422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.847599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.847644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.847768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.847799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.847986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.848016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.848166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.848198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.848320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.848353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.848466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.848496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.848603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.848634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.848802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.848833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.848999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.849036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 
00:33:29.315 [2024-12-10 00:15:03.849206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.849239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.849358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.849390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.849494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.849524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.849635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.849665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.849793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.849823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.849927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.849956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.850074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.850106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.850266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.850321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.850505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.850573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.850701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.850743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 
00:33:29.315 [2024-12-10 00:15:03.850939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.315 [2024-12-10 00:15:03.850971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.315 qpair failed and we were unable to recover it. 00:33:29.315 [2024-12-10 00:15:03.851149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.851192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.851382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.851415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.851526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.851558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.851675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.851706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.851874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.851906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.852010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.852042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.852224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.852257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.852385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.852417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.852539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.852571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 
00:33:29.316 [2024-12-10 00:15:03.852682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.852713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.852977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.853009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.853177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.853209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.853411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.853444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.853573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.853605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.853772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.853805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.853924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.853956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.854061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.854093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.854258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.854293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.854480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.854512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 
00:33:29.316 [2024-12-10 00:15:03.854690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.854722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.854896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.854927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.855032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.855071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.855242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.855275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.316 qpair failed and we were unable to recover it. 00:33:29.316 [2024-12-10 00:15:03.855496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.316 [2024-12-10 00:15:03.855558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.320 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.855759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.855791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.855980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.856012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.856181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.856214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.856418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.856456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.856629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.856659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 
00:33:29.321 [2024-12-10 00:15:03.856829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.856859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.856992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.857024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.857136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.857176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.857300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.857330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.857434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.857466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.857590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.857620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.857786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.857817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.857984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.858023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.858194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.858227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.858336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.858367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 
00:33:29.321 [2024-12-10 00:15:03.858498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.858528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.858702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.858732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.858839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.858870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.858972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.859002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.859219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.859251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.859351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.859382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.859551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.859581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.859775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.859805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.859928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.859958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.860153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.860198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 
00:33:29.321 [2024-12-10 00:15:03.860316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.860346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.860523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.860553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.321 [2024-12-10 00:15:03.860667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.321 [2024-12-10 00:15:03.860698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.321 qpair failed and we were unable to recover it. 00:33:29.322 [2024-12-10 00:15:03.860865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-12-10 00:15:03.860898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-12-10 00:15:03.861084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-12-10 00:15:03.861114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-12-10 00:15:03.861334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-12-10 00:15:03.861366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-12-10 00:15:03.861498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-12-10 00:15:03.861529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-12-10 00:15:03.861640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-12-10 00:15:03.861671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-12-10 00:15:03.861840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-12-10 00:15:03.861871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 00:33:29.322 [2024-12-10 00:15:03.862038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.322 [2024-12-10 00:15:03.862068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.322 qpair failed and we were unable to recover it. 
00:33:29.322 [2024-12-10 00:15:03.862261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.322 [2024-12-10 00:15:03.862295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420
00:33:29.322 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats continuously for every reconnect attempt from 00:15:03.862 through 00:15:03.901 ...]
00:33:29.331 [2024-12-10 00:15:03.901404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-12-10 00:15:03.901435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-12-10 00:15:03.901603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-12-10 00:15:03.901633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-12-10 00:15:03.901815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-12-10 00:15:03.901847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-12-10 00:15:03.901950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-12-10 00:15:03.901981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-12-10 00:15:03.902118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-12-10 00:15:03.902150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-12-10 00:15:03.902258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-12-10 00:15:03.902289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-12-10 00:15:03.902454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-12-10 00:15:03.902485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-12-10 00:15:03.902584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-12-10 00:15:03.902615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-12-10 00:15:03.902744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-12-10 00:15:03.902775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-12-10 00:15:03.902956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-12-10 00:15:03.902987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 
00:33:29.331 [2024-12-10 00:15:03.903104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-12-10 00:15:03.903135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-12-10 00:15:03.903255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-12-10 00:15:03.903287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-12-10 00:15:03.903400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.331 [2024-12-10 00:15:03.903432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.331 qpair failed and we were unable to recover it. 00:33:29.331 [2024-12-10 00:15:03.903556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.903587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-12-10 00:15:03.903709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.903740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-12-10 00:15:03.903932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.903962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-12-10 00:15:03.904133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.904175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-12-10 00:15:03.904344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.904375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-12-10 00:15:03.904506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.904538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-12-10 00:15:03.904659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.904690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 
00:33:29.332 [2024-12-10 00:15:03.904802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.904833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-12-10 00:15:03.905003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.905034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-12-10 00:15:03.905140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.905204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-12-10 00:15:03.905315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.905346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-12-10 00:15:03.905542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.905575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-12-10 00:15:03.905691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.905722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-12-10 00:15:03.905896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.905927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.332 [2024-12-10 00:15:03.906028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.332 [2024-12-10 00:15:03.906059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.332 qpair failed and we were unable to recover it. 00:33:29.333 [2024-12-10 00:15:03.906181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-12-10 00:15:03.906213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-12-10 00:15:03.906382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-12-10 00:15:03.906413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 
00:33:29.333 [2024-12-10 00:15:03.906581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-12-10 00:15:03.906612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-12-10 00:15:03.906715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-12-10 00:15:03.906747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-12-10 00:15:03.906915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-12-10 00:15:03.906946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-12-10 00:15:03.907079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-12-10 00:15:03.907110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-12-10 00:15:03.907304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-12-10 00:15:03.907342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-12-10 00:15:03.907537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-12-10 00:15:03.907568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-12-10 00:15:03.907685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-12-10 00:15:03.907716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-12-10 00:15:03.907886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-12-10 00:15:03.907918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-12-10 00:15:03.908030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-12-10 00:15:03.908060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-12-10 00:15:03.908226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-12-10 00:15:03.908257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 
00:33:29.333 [2024-12-10 00:15:03.908387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-12-10 00:15:03.908418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.333 [2024-12-10 00:15:03.908538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.333 [2024-12-10 00:15:03.908569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.333 qpair failed and we were unable to recover it. 00:33:29.334 [2024-12-10 00:15:03.908669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.334 [2024-12-10 00:15:03.908700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.334 qpair failed and we were unable to recover it. 00:33:29.334 [2024-12-10 00:15:03.908824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.334 [2024-12-10 00:15:03.908856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.334 qpair failed and we were unable to recover it. 00:33:29.334 [2024-12-10 00:15:03.909030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.334 [2024-12-10 00:15:03.909061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.334 qpair failed and we were unable to recover it. 00:33:29.334 [2024-12-10 00:15:03.909170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.334 [2024-12-10 00:15:03.909202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.334 qpair failed and we were unable to recover it. 00:33:29.334 [2024-12-10 00:15:03.909342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.334 [2024-12-10 00:15:03.909373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.334 qpair failed and we were unable to recover it. 00:33:29.334 [2024-12-10 00:15:03.909491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.334 [2024-12-10 00:15:03.909522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.334 qpair failed and we were unable to recover it. 00:33:29.334 [2024-12-10 00:15:03.909638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.334 [2024-12-10 00:15:03.909670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.334 qpair failed and we were unable to recover it. 00:33:29.334 [2024-12-10 00:15:03.909777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.334 [2024-12-10 00:15:03.909807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.334 qpair failed and we were unable to recover it. 
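(For reference: errno = 111 on Linux is ECONNREFUSED, so each of these attempts is being actively refused at 10.0.0.2:4420 rather than timing out; a short reproduction sketch follows after the end of this failure block.)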
[The identical connect() failure (errno = 111) and nvme_tcp_qpair_connect_sock error continue against tqpair=0x7f0294000b90 until 00:15:03.910, repeat against tqpair=0x7f0290000b90 from 00:15:03.910 to 00:15:03.919, and then resume against tqpair=0x7f0294000b90 through 00:15:03.933; every attempt targets addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it."]
00:33:29.339 [2024-12-10 00:15:03.933913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-12-10 00:15:03.933944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-12-10 00:15:03.934050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-12-10 00:15:03.934081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-12-10 00:15:03.934183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-12-10 00:15:03.934216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-12-10 00:15:03.934473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-12-10 00:15:03.934504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-12-10 00:15:03.934621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-12-10 00:15:03.934661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.339 [2024-12-10 00:15:03.934922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.339 [2024-12-10 00:15:03.934953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.339 qpair failed and we were unable to recover it. 00:33:29.340 [2024-12-10 00:15:03.935071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-12-10 00:15:03.935103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-12-10 00:15:03.935363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-12-10 00:15:03.935394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-12-10 00:15:03.935587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-12-10 00:15:03.935619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-12-10 00:15:03.935730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-12-10 00:15:03.935761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 
00:33:29.340 [2024-12-10 00:15:03.935981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-12-10 00:15:03.936012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-12-10 00:15:03.936282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-12-10 00:15:03.936335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-12-10 00:15:03.936521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-12-10 00:15:03.936551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-12-10 00:15:03.936747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-12-10 00:15:03.936778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-12-10 00:15:03.937041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-12-10 00:15:03.937072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-12-10 00:15:03.937322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-12-10 00:15:03.937354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-12-10 00:15:03.937470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-12-10 00:15:03.937501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-12-10 00:15:03.937690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-12-10 00:15:03.937720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-12-10 00:15:03.937891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.340 [2024-12-10 00:15:03.937922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.340 qpair failed and we were unable to recover it. 00:33:29.340 [2024-12-10 00:15:03.938114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.938145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 
00:33:29.341 [2024-12-10 00:15:03.938424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.938456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.938643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.938675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.938935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.938965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.939138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.939190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.939375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.939413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.939608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.939640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.939808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.939839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.940006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.940037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.940148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.940188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.940385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.940415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 
00:33:29.341 [2024-12-10 00:15:03.940669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.940700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.940890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.940921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.941132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.941172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.941347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.941378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.941508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.941539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.941729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.941760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.941961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.941991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.942178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.942210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.942453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.942484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.942765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.942796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 
00:33:29.341 [2024-12-10 00:15:03.942982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.943013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.943248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.943281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.943396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.943427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.943536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.943566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.943733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.943763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.943887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.943918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.944113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.944145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.944336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.944368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.944671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.944703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.944871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.944902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 
00:33:29.341 [2024-12-10 00:15:03.945095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.945126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.945426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.341 [2024-12-10 00:15:03.945460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.341 qpair failed and we were unable to recover it. 00:33:29.341 [2024-12-10 00:15:03.945593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.945624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.945795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.945826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.946023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.946054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.946221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.946253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.946515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.946547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.946739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.946770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.946952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.946984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.947244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.947276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 
00:33:29.342 [2024-12-10 00:15:03.947442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.947474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.947639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.947670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.947931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.947963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.948166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.948197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.948393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.948435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.948722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.948754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.948933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.948964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.949065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.949096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.949213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.949245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.949483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.949514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 
00:33:29.342 [2024-12-10 00:15:03.949792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.949823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.949992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.950024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.950204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.950237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.950355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.950386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.950553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.950583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.950759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.950791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.951091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.951122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.951391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.951423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.951541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.951572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.951810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.951842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 
00:33:29.342 [2024-12-10 00:15:03.951943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.951972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.952151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.952193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.952380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.952412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.952520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.952550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.952767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.952798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.953034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.953065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.953384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.953416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.953538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.953569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.953686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.953717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.953913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.953945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 
00:33:29.342 [2024-12-10 00:15:03.954114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.954145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.954327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.954358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.954639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.954671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.954840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.954871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.955064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.342 [2024-12-10 00:15:03.955095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.342 qpair failed and we were unable to recover it. 00:33:29.342 [2024-12-10 00:15:03.955243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.955275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.955467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.955498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.955603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.955635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.955851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.955882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.956006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.956037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 
00:33:29.343 [2024-12-10 00:15:03.956234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.956266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.956446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.956477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.956658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.956689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.956951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.956982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.957150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.957199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.957314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.957345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.957525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.957556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.957792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.957823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.957941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.957973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.958252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.958286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 
00:33:29.343 [2024-12-10 00:15:03.958456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.958486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.958618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.958649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.958762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.958794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.958995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.959027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.959195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.959227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.959394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.959425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.959618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.959648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.959902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.959934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.960222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.960255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.960437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.960467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 
00:33:29.343 [2024-12-10 00:15:03.960633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.960664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.960854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.960885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.961060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.961092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.961288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.961320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.961500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.961530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.961657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.961688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.961950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.961981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.962154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.962197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.962389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.962420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.962606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.962637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 
00:33:29.343 [2024-12-10 00:15:03.962807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.962837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.962955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.962987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.963120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.963150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.963269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.963301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.963467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.963499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.343 [2024-12-10 00:15:03.963687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.343 [2024-12-10 00:15:03.963718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.343 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.963888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.963919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.964093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.964124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.964319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.964352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.964467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.964498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 
00:33:29.344 [2024-12-10 00:15:03.964600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.964630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.964805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.964836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.964955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.964986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.965099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.965131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.965381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.965453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.965726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.965762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.965955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.965989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.966177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.966211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.966328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.966360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.966587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.966618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 
00:33:29.344 [2024-12-10 00:15:03.966787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.966817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.967087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.967117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.967395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.967428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.967608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.967638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.967763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.967793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.967970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.968001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.968133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.968174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.968344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.968376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.968654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.968686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.968904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.968936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 
00:33:29.344 [2024-12-10 00:15:03.969214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.969247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.969526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.969557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.969730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.969761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.970024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.970055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.970341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.970373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.970497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.970528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.970717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.970749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.970862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.970893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.971001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.971033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.971236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.971269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 
00:33:29.344 [2024-12-10 00:15:03.971473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.971503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.971613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.971650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.971819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.971850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.972041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.972074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.972259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.972292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.972474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.972506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.972676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.972706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.972880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.972911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.973108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.973141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.344 [2024-12-10 00:15:03.973272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.973304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 
00:33:29.344 [2024-12-10 00:15:03.973490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.344 [2024-12-10 00:15:03.973521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.344 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.973764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.973795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.974006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.974036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.974210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.974242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.974362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.974393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.974680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.974712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.974909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.974940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.975205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.975238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.975447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.975479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.975724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.975755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 
00:33:29.345 [2024-12-10 00:15:03.975932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.975963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.976071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.976103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.976224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.976256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.976516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.976548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.976729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.976759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.977022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.977052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.977187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.977220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.977404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.977436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.977619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.977651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.977758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.977788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 
00:33:29.345 [2024-12-10 00:15:03.978058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.978090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.978210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.978242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.978423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.978455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.978630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.978662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.978860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.978893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.979131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.979170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.979354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.979385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.979557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.979587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.979759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.979791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.979971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.980002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 
00:33:29.345 [2024-12-10 00:15:03.980118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.980150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.980397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.980434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.980630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.980661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.980845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.980875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.981044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.981077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.981359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.981392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.981667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.981699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.981983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.982015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.982206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.982241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.982482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.982513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 
00:33:29.345 [2024-12-10 00:15:03.982802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.982834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.983024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.983057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.983175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.983207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.983398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.983430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.983612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.983642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.983834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.983866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.984152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.984193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.984461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.984494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.984706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.984737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.984929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.984961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 
00:33:29.345 [2024-12-10 00:15:03.985067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.985098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.985302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.985334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.985502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.985532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.985728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.985759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.986019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.986050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.986340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.986373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.986576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.345 [2024-12-10 00:15:03.986608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.345 qpair failed and we were unable to recover it. 00:33:29.345 [2024-12-10 00:15:03.986794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.986825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.987027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.987059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.987330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.987363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 
00:33:29.346 [2024-12-10 00:15:03.987541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.987573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.987832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.987863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.988053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.988085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.988302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.988334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.988620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.988652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.988899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.988930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.989136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.989174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.989368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.989399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.989508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.989539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.989706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.989736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 
00:33:29.346 [2024-12-10 00:15:03.989908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.989941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.990207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.990246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.990441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.990474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.990658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.990689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.990888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.990920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.991136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.991177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.991352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.991384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.991565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.991595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.991868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.991900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.992190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.992224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 
00:33:29.346 [2024-12-10 00:15:03.992414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.992447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.992631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.992663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.992770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.992801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.992969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.993001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.993282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.993315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.993496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.993528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.993720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.993751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.993858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.993889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.994005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.994035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.994242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.994275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 
00:33:29.346 [2024-12-10 00:15:03.994458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.994490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.994601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.994632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.994800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.994831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.994943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.994973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.995170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.995204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.995375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.995406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.995604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.995635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.995810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.995842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.996039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.996070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.996192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.996224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 
00:33:29.346 [2024-12-10 00:15:03.996345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.996377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.996495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.996526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.996718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.996750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.997003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.997034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.997152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.997194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.997382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.997414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.997612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.997644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.997929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.997960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.998130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.998187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.998455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.998487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 
00:33:29.346 [2024-12-10 00:15:03.998724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.998756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.998952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.998989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.999232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.999265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.999450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.999482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.346 [2024-12-10 00:15:03.999653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.346 [2024-12-10 00:15:03.999685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.346 qpair failed and we were unable to recover it. 00:33:29.347 [2024-12-10 00:15:03.999789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-12-10 00:15:03.999821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-12-10 00:15:04.000012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-12-10 00:15:04.000044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-12-10 00:15:04.000286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-12-10 00:15:04.000320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-12-10 00:15:04.000491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-12-10 00:15:04.000522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 00:33:29.347 [2024-12-10 00:15:04.000788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-12-10 00:15:04.000819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it. 
00:33:29.347 [2024-12-10 00:15:04.001023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.347 [2024-12-10 00:15:04.001055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.347 qpair failed and we were unable to recover it.
00:33:29.350 The three messages above repeat for every subsequent connect attempt through [2024-12-10 00:15:04.050256]; only the per-attempt timestamps differ. From the attempt logged at [2024-12-10 00:15:04.042426] onward the failing tqpair is 0x24c9be0 instead of 0x7f0290000b90, still targeting addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it."
00:33:29.350 [2024-12-10 00:15:04.050435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.050467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.050733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.050765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.050949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.050980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.051179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.051212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.051416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.051448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.051721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.051751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.051872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.051903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.052042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.052073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.052292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.052326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.052558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.052591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 
00:33:29.350 [2024-12-10 00:15:04.052770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.052801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.053073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.053105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.053447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.053480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.053686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.053718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.053968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.053999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.054113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.054144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.054316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.054349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.054553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.054584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.054785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.054816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.055011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.055042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 
00:33:29.350 [2024-12-10 00:15:04.055223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.055255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.055380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.055413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.055636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.055675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.055877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.055909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.056095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.056127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.056335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.056373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.056506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.056538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.056657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.056687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.056936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.056967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.057181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.057213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 
00:33:29.350 [2024-12-10 00:15:04.057346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.057377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.057572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.057605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.057789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.057821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.057951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.057981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.058242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.058276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.058454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.058486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.058601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.058634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.058813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.058845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.058975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.059007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.059125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.059167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 
00:33:29.350 [2024-12-10 00:15:04.059356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.059389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.059669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.059701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.059974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.060006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.060306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.060339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.060558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.060589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.060805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.060837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.061021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.061052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.061230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.061263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.061440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.350 [2024-12-10 00:15:04.061471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.350 qpair failed and we were unable to recover it. 00:33:29.350 [2024-12-10 00:15:04.061683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.061719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 
00:33:29.351 [2024-12-10 00:15:04.061834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.061866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.062041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.062074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.062254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.062288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.062543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.062576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.062704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.062736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.062984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.063016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.063143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.063188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.063315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.063347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.063536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.063568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.063679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.063711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 
00:33:29.351 [2024-12-10 00:15:04.063889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.063921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.064201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.064233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.064490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.064522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.064716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.064748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.064964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.064995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.065182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.065216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.065415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.065446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.065647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.065679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.065842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.065874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.065996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.066027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 
00:33:29.351 [2024-12-10 00:15:04.066304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.066337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.066614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.066646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.066931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.066963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.067148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.067189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.067369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.067402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.067674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.067707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.067913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.067946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.068183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.068215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.068463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.068496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.068772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.068804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 
00:33:29.351 [2024-12-10 00:15:04.069011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.069043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.069220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.069253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.069443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.069474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.069602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.069634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.069753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.069785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.069962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.069994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.070182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.070214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.070403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.070435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.070615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.070647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.070844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.070876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 
00:33:29.351 [2024-12-10 00:15:04.071007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.071039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.071350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.071382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.071509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.071542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.071791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.071823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.072003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.072035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.072153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.072195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.072401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.072433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.072572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.072604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.072884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.072916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.073096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.073127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 
00:33:29.351 [2024-12-10 00:15:04.073318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.073351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.073475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.073507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.073696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.073727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.073930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.073967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.074081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.074113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.074326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.074359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.074568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.351 [2024-12-10 00:15:04.074600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.351 qpair failed and we were unable to recover it. 00:33:29.351 [2024-12-10 00:15:04.074869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.074901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.075081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.075113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.075302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.075335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 
00:33:29.352 [2024-12-10 00:15:04.075539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.075570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.075692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.075723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.075927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.075959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.076237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.076270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.076450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.076482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.076601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.076632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.076913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.076945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.077129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.077170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.077354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.077386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.077515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.077547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 
00:33:29.352 [2024-12-10 00:15:04.077727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.077759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.078010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.078041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.078237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.078271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.078464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.078496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.078711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.078744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.079007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.079039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.079232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.079264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.079519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.079551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.079852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.079884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.080085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.080117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 
00:33:29.352 [2024-12-10 00:15:04.080332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.080378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.080516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.080549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.080824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.080857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.081042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.081074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.081309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.081342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.081639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.081671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.081848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.081880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.082057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.082089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.082300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.082334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.082534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.082566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 
00:33:29.352 [2024-12-10 00:15:04.082781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.082813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.083070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.083101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.083302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.083335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.083512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.083544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.083822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.083853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.084131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.084173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.084358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.084390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.084689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.084721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.084899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.084930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.085207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.085240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 
00:33:29.352 [2024-12-10 00:15:04.085570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.085602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.085876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.085908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.086208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.086241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.086462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.086494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.086744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.086776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.086977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.087010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.087288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.087320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.087520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.087553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.087682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.087715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.087995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.088028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 
00:33:29.352 [2024-12-10 00:15:04.088278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.088312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.088443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.088476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.088654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.088687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.088804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.088837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.088961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.088994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.089268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.089303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.089426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.089458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.089636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.089669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.089845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.352 [2024-12-10 00:15:04.089877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.352 qpair failed and we were unable to recover it. 00:33:29.352 [2024-12-10 00:15:04.090100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.090133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 
00:33:29.353 [2024-12-10 00:15:04.090354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.090387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.090596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.090630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.090895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.090926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.091106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.091138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.091396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.091430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.091538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.091569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.091821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.091853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.091988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.092020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.092226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.092258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.092375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.092407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 
00:33:29.353 [2024-12-10 00:15:04.092614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.092646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.092754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.092786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.092990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.093022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.093225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.093258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.093511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.093543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.093751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.093783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.093958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.093990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.094246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.094278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.094459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.094491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.094603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.094634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 
00:33:29.353 [2024-12-10 00:15:04.094858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.094889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.095010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.095042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.095318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.095350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.095625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.095657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.095953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.095985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.096257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.096289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.096513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.096545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.096797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.096830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.097109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.097146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.097363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.097395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 
00:33:29.353 [2024-12-10 00:15:04.097590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.097622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.097901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.097933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.098113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.098144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.098285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.098317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.098530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.098562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.098762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.098794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.098988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.099020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.099140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.099184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.099382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.099414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.099544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.099576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 
00:33:29.353 [2024-12-10 00:15:04.099759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.099790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.099993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.100025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.100216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.100250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.100519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.100551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.100754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.100786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.100965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.100996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.101269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.101302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.101576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.101607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.101834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.101866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.102092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.102124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 
00:33:29.353 [2024-12-10 00:15:04.102422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.102456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.102742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.102774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.102895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.102926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.103129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.103170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.103348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.103380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.103630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.103668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.103919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.103951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.104131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.104173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.104353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.104385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.104662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.104694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 
00:33:29.353 [2024-12-10 00:15:04.104886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.104918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.353 [2024-12-10 00:15:04.105043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.353 [2024-12-10 00:15:04.105075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.353 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.105260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.105293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.105473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.105505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.105792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.105824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.106139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.106184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.106487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.106518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.106700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.106732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.106854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.106886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.107016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.107048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 
00:33:29.354 [2024-12-10 00:15:04.107271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.107304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.107430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.107462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.107662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.107694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.107896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.107928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.108120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.108153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.108343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.108375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.108645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.108677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.108870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.108901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.109082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.109114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.109396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.109429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 
00:33:29.354 [2024-12-10 00:15:04.109710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.109742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.110030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.110062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.110271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.110311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.110496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.110528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.110660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.110691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.110949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.110981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.111180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.111213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.111391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.111423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.111549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.111580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.111700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.111732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 
00:33:29.354 [2024-12-10 00:15:04.111939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.111971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.112193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.112227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.112421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.112453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.112702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.112734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.112966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.112997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.113110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.113142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.113394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.113426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.113533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.113565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.113844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.113876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.114146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.114191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 
00:33:29.354 [2024-12-10 00:15:04.114449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.114481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.114673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.114703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.114981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.115013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.115301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.115334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.115610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.115642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.115821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.115853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.116045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.116076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.116361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.116395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.116701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.116732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.116990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.117022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 
00:33:29.354 [2024-12-10 00:15:04.117253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.117286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.117467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.117499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.117678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.117710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.117839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.117871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.118004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.118037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.118233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.118266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.118448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.118479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.118608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.118640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.118815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.118847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.119027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.119059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 
00:33:29.354 [2024-12-10 00:15:04.119262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.119297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.119482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.119513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.119695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.119727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.119938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.119972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.120081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.120112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.120305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.120338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.120465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.120495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.120749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.120781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.120895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.120927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 00:33:29.354 [2024-12-10 00:15:04.121130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.354 [2024-12-10 00:15:04.121171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.354 qpair failed and we were unable to recover it. 
00:33:29.354 [2024-12-10 00:15:04.121299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.354 [2024-12-10 00:15:04.121332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:29.354 qpair failed and we were unable to recover it.
[... the same pair of errors (connect() failed, errno = 111 / sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats continuously from 00:15:04.121 through 00:15:04.130 ...]
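Errno 111 on Linux is ECONNREFUSED: the host 10.0.0.2 answered, but nothing was accepting TCP connections on the NVMe/TCP port 4420, so every connect() issued by the initiator was refused. The standalone C sketch below is illustrative only (it is not SPDK code; the address and port are taken from the log, everything else is an assumption) and reproduces the same errno when run against a reachable host with no listener on that port:

/* Hypothetical reproduction of the errno = 111 condition seen above.
 * Not SPDK code: it issues one blocking connect() to an address/port
 * where no NVMe/TCP target is listening. On a reachable host with no
 * listener, connect() fails with ECONNREFUSED (111 on Linux). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the host up but no listener on port 4420, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

If the host itself were unreachable, the failure would more typically be ETIMEDOUT or EHOSTUNREACH, so a steady stream of errno = 111 usually indicates that the target side has no listener up on 10.0.0.2:4420 at that point in the test rather than a broken network path.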
00:33:29.355 [2024-12-10 00:15:04.131469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-12-10 00:15:04.131537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) continue from 00:15:04.131 through 00:15:04.136, switching between tqpair=0x7f029c000b90 and tqpair=0x24c9be0, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:33:29.355 [2024-12-10 00:15:04.136818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.355 [2024-12-10 00:15:04.136849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:29.355 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) against tqpair=0x24c9be0, addr=10.0.0.2, port=4420 repeats continuously from 00:15:04.136 through 00:15:04.175, every attempt ending with "qpair failed and we were unable to recover it." ...]
00:33:29.357 [2024-12-10 00:15:04.175482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-12-10 00:15:04.175514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-12-10 00:15:04.175764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-12-10 00:15:04.175795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-12-10 00:15:04.176061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-12-10 00:15:04.176093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-12-10 00:15:04.176273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-12-10 00:15:04.176305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.357 qpair failed and we were unable to recover it. 00:33:29.357 [2024-12-10 00:15:04.176578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.357 [2024-12-10 00:15:04.176610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.176833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.176865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.177039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.177070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.177342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.177374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.177521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.177553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.177802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.177834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 
00:33:29.358 [2024-12-10 00:15:04.177959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.177989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.178195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.178228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.178423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.178456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.178633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.178664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.178944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.178977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.179152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.179195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.179450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.179482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.179688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.179719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.179839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.179871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.179980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.180011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 
00:33:29.358 [2024-12-10 00:15:04.180282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.180314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.180491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.180523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.180777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.180809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.181021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.181053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.181300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.181333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.181512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.181543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.181744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.181775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.181912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.181944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.182215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.182248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.182441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.182473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 
00:33:29.358 [2024-12-10 00:15:04.182653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.182684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.182959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.182991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.183187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.183218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.183479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.183512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.183701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.183731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.183982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.184013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.184206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.184239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.184434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.184465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.184641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.184673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.184858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.184891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 
00:33:29.358 [2024-12-10 00:15:04.185082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.185113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.185409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.185442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.185653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.185685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.185863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.185894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.186111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.186143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.186399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.186431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.186608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.186640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.186817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.186849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.187049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.187081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.187202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.187236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 
00:33:29.358 [2024-12-10 00:15:04.187429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.187461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.187733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.187764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.188015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.188048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.188226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.188258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.188536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.188573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.188759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.188790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.188996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.189028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.189153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.189195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.189385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.189417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.189597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.189628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 
00:33:29.358 [2024-12-10 00:15:04.189736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.189767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.189963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.189995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.190121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.190153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.190312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.190344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.190521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.190552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.190825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.190857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.191062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.191093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.191215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.191249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.191458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.191491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.191669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.191701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 
00:33:29.358 [2024-12-10 00:15:04.191884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.358 [2024-12-10 00:15:04.191915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.358 qpair failed and we were unable to recover it. 00:33:29.358 [2024-12-10 00:15:04.192030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.192062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.192254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.192287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.192416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.192446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.192724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.192755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.192949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.192980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.193242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.193274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.193454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.193487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.193665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.193697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.193815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.193847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 
00:33:29.359 [2024-12-10 00:15:04.194070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.194101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.194252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.194291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.194484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.194515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.194671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.194703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.194923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.194954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.195081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.195113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.195245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.195279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.195460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.195491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.195666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.195697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.195910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.195942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 
00:33:29.359 [2024-12-10 00:15:04.196202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.196233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.196426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.196458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.196735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.196767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.196894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.196926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.197056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.197088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.197283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.197317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.197524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.197555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.197765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.197797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.198093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.198124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.198316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.198349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 
00:33:29.359 [2024-12-10 00:15:04.198631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.198663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.198793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.198823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.199113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.199145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.199382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.199414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.199691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.199722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.199948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.199979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.200180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.200212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.200463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.200494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.200673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.200710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.200887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.200918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 
00:33:29.359 [2024-12-10 00:15:04.201106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.201137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.201346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.201379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.201506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.201536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.201660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.201691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.201893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.201924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.202200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.202233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.202433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.202464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.202736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.202768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.202899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.202930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.203059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.203091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 
00:33:29.359 [2024-12-10 00:15:04.203371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.203403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.203595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.203627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.203896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.203928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.204036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.204068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.204201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.204233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.204499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.204531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.204708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.204740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.205017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.205049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.205327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.205359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.205553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.205584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 
00:33:29.359 [2024-12-10 00:15:04.205840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.205872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.206074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.206105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.206297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.206330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.206535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.206566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.206866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.206898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.207105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.207137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.207448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.207482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.207600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.207630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.207905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.207936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.208144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.208189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 
00:33:29.359 [2024-12-10 00:15:04.208318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.208350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.208458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.359 [2024-12-10 00:15:04.208489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.359 qpair failed and we were unable to recover it. 00:33:29.359 [2024-12-10 00:15:04.208609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.208641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.208777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.208808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.208985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.209016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.209211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.209245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.209438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.209469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.209722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.209755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.209901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.209932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.210138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.210192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 
00:33:29.360 [2024-12-10 00:15:04.210326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.210357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.210466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.210498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.210699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.210730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.210932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.210964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.211252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.211286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.211408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.211440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.211625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.211656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.211934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.211966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.212186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.212218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.212485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.212517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 
00:33:29.360 [2024-12-10 00:15:04.212785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.212815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.212996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.213028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.213206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.213238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.213368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.213400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.213575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.213605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.213728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.213760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.213976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.214006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.214207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.214240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.214359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.214391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.214669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.214700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 
00:33:29.360 [2024-12-10 00:15:04.214903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.214935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.215114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.215145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.215332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.215364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.360 [2024-12-10 00:15:04.215559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.360 [2024-12-10 00:15:04.215590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.360 qpair failed and we were unable to recover it. 00:33:29.656 [2024-12-10 00:15:04.215784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.656 [2024-12-10 00:15:04.215816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-12-10 00:15:04.215929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.656 [2024-12-10 00:15:04.215961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-12-10 00:15:04.216212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.656 [2024-12-10 00:15:04.216250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-12-10 00:15:04.216454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.656 [2024-12-10 00:15:04.216486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-12-10 00:15:04.216695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.656 [2024-12-10 00:15:04.216726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-12-10 00:15:04.216904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.656 [2024-12-10 00:15:04.216936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.656 qpair failed and we were unable to recover it. 
00:33:29.656 [2024-12-10 00:15:04.217121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.656 [2024-12-10 00:15:04.217152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-12-10 00:15:04.217423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.656 [2024-12-10 00:15:04.217454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-12-10 00:15:04.217643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.656 [2024-12-10 00:15:04.217676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-12-10 00:15:04.217794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.656 [2024-12-10 00:15:04.217826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.656 qpair failed and we were unable to recover it. 00:33:29.656 [2024-12-10 00:15:04.218019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.218050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.218229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.218264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.218446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.218477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.218587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.218618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.218888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.218921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.219097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.219128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 
00:33:29.657 [2024-12-10 00:15:04.219401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.219434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.219712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.219743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.219856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.219888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.220089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.220120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.220329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.220362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.220614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.220645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.220863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.220895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.221142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.221183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.221421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.221453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.221638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.221670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 
00:33:29.657 [2024-12-10 00:15:04.221851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.221883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.222196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.222230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.222409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.222441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.222582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.222618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.222802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.222834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.223109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.223140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.223432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.223465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.223672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.223703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.223904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.223935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.224056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.224087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 
00:33:29.657 [2024-12-10 00:15:04.224360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.224394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.224673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.224705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.224990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.225022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.225206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.225239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.225361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.225395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.225508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.225540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.225758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.225789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.226070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.226103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.226253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.226286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.226483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.226515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 
00:33:29.657 [2024-12-10 00:15:04.226626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.226658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.226859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.226891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.657 qpair failed and we were unable to recover it. 00:33:29.657 [2024-12-10 00:15:04.227193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.657 [2024-12-10 00:15:04.227226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.227488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.227520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.227769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.227801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.228048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.228080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.228332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.228366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.228572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.228603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.228799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.228830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.229016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.229047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 
00:33:29.658 [2024-12-10 00:15:04.229229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.229262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.229514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.229546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.229652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.229684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.229863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.229894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.230075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.230105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.230233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.230266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.230518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.230551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.230798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.230830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.231006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.231038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.231261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.231294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 
00:33:29.658 [2024-12-10 00:15:04.231472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.231504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.231751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.231783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.231974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.232005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.232185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.232219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.232354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.232385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.232517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.232549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.232726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.232758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.233031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.233062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.233264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.233296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.233511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.233542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 
00:33:29.658 [2024-12-10 00:15:04.233662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.233694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.233952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.233984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.234274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.234307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.234547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.234579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.234710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.234741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.234941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.234972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.235088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.235120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.235376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.235409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.235689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.235722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 00:33:29.658 [2024-12-10 00:15:04.235925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.658 [2024-12-10 00:15:04.235958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.658 qpair failed and we were unable to recover it. 
00:33:29.659 [2024-12-10 00:15:04.236138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.236182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.236451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.236483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.236606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.236638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.236914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.236945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.237217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.237250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.237440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.237472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.237660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.237691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.237799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.237830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.238102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.238134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.238322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.238354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 
00:33:29.659 [2024-12-10 00:15:04.238509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.238540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.238721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.238759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.238964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.238995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.239120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.239152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.239354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.239386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.239665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.239697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.239807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.239838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.240058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.240089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.240302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.240335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.240454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.240486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 
00:33:29.659 [2024-12-10 00:15:04.240665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.240696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.240801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.240834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.241067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.241099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.241239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.241273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.241526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.241557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.241674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.241706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.241904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.241936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.242145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.242208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.242388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.242420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.242623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.242655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 
00:33:29.659 [2024-12-10 00:15:04.242868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.242900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.243077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.243109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.243395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.243427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.243728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.243760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.243880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.243912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.244192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.244225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.244453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.244485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.244706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.244737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.659 qpair failed and we were unable to recover it. 00:33:29.659 [2024-12-10 00:15:04.244986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.659 [2024-12-10 00:15:04.245023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.245290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.245323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 
00:33:29.660 [2024-12-10 00:15:04.245532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.245563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.245742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.245774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.245957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.245989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.246195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.246228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.246356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.246387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.246585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.246617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.246825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.246857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.247039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.247070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.247275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.247308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.247418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.247450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 
00:33:29.660 [2024-12-10 00:15:04.247569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.247600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.247786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.247818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.248003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.248035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.248295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.248349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.248634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.248666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.248907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.248939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.249121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.249153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.249368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.249400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.249529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.249560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.249873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.249905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 
00:33:29.660 [2024-12-10 00:15:04.250038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.250069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.250341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.250374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.250557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.250588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.250765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.250796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.250975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.251007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.251178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.251217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.251326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.251356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.251491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.251523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.251796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.251827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.252025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.252056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 
00:33:29.660 [2024-12-10 00:15:04.252245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.252279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.252504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.252536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.252728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.252760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.252961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.252994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.253190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.253222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.660 [2024-12-10 00:15:04.253519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.660 [2024-12-10 00:15:04.253552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.660 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.253753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.253786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.253989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.254021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.254194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.254227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.254420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.254496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 
00:33:29.661 [2024-12-10 00:15:04.254820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.254857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.255065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.255097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.255297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.255331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.255604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.255636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.255917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.255949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.256128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.256169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.256446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.256478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.256759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.256790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.257080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.257112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.257352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.257385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 
00:33:29.661 [2024-12-10 00:15:04.257568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.257600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.257721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.257752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.257883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.257925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.258102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.258134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.258349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.258383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.258587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.258619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.258889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.258921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.259041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.259073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.259321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.259354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.259633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.259665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 
00:33:29.661 [2024-12-10 00:15:04.259863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.259895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.260174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.260207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.260495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.260528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.260654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.260686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.260884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.260916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.261111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.261142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.261350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.261382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.261585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.261616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.261818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.661 [2024-12-10 00:15:04.261850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.661 qpair failed and we were unable to recover it. 00:33:29.661 [2024-12-10 00:15:04.262048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.262079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 
00:33:29.662 [2024-12-10 00:15:04.262280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.262313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.262539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.262571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.262847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.262878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.263004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.263035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.263215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.263250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.263442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.263474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.263607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.263638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.263936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.263967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.264179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.264211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.264352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.264384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 
00:33:29.662 [2024-12-10 00:15:04.264521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.264553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.264674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.264705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.264904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.264935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.265116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.265148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.265359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.265391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.265522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.265555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.265806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.265838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.266018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.266049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.266244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.266277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.266459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.266492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 
00:33:29.662 [2024-12-10 00:15:04.266671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.266703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.266824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.266855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.266982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.267020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.267319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.267352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.267513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.267544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.267724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.267756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.268030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.268062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.268246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.268280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.268481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.268512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.268786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.268818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 
00:33:29.662 [2024-12-10 00:15:04.269072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.269103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.269419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.269451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.269722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.269754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.270006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.270037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.270353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.270386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.270595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.270626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.270812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.270844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.662 [2024-12-10 00:15:04.270963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.662 [2024-12-10 00:15:04.270994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.662 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.271283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.271316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.271455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.271486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 
00:33:29.663 [2024-12-10 00:15:04.271762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.271794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.271971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.272002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.272195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.272228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.272535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.272566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.272851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.272883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.272999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.273029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.273299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.273332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.273531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.273562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.273739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.273771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.273953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.273985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 
00:33:29.663 [2024-12-10 00:15:04.274178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.274211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.274332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.274364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.274618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.274649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.274828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.274859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.275060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.275092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.275273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.275306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.275501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.275532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.275663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.275695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.275896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.275928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.276187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.276220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 
00:33:29.663 [2024-12-10 00:15:04.276342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.276374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.276574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.276605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.276724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.276761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.277019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.277050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.277339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.277371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.277552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.277583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.277761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.277793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.278060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.278091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.278272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.278304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.278497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.278528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 
00:33:29.663 [2024-12-10 00:15:04.278712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.278742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.278873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.278904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.279102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.279134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.279419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.279451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.279683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.279715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.279921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.279953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.663 [2024-12-10 00:15:04.280184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.663 [2024-12-10 00:15:04.280218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.663 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.280469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.280501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.280705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.280737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.281012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.281044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 
00:33:29.664 [2024-12-10 00:15:04.281242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.281275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.281527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.281559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.281767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.281798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.281979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.282011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.282133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.282172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.282310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.282342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.282520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.282552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.282731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.282762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.283054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.283086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.283374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.283407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 
00:33:29.664 [2024-12-10 00:15:04.283687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.283719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.283957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.283990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.284201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.284235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.284510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.284542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.284652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.284683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.284883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.284915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.285094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.285126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.285322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.285356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.285581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.285613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 00:33:29.664 [2024-12-10 00:15:04.285891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.664 [2024-12-10 00:15:04.285923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.664 qpair failed and we were unable to recover it. 
00:33:29.664 [2024-12-10 00:15:04.286182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.664 [2024-12-10 00:15:04.286217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420
00:33:29.664 qpair failed and we were unable to recover it.
00:33:29.664 [2024-12-10 00:15:04.286414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.664 [2024-12-10 00:15:04.286445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420
00:33:29.664 qpair failed and we were unable to recover it.
00:33:29.664 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every further reconnect attempt between 00:15:04.286720 and 00:15:04.333532 ...]
00:33:29.670 [2024-12-10 00:15:04.333725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.670 [2024-12-10 00:15:04.333757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420
00:33:29.670 qpair failed and we were unable to recover it.
00:33:29.670 [2024-12-10 00:15:04.334029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.334061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.334353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.334388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.334637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.334668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.334864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.334898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.335016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.335053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.335253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.335288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.335518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.335552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.335672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.335704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.335928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.335959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.336196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.336230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 
00:33:29.670 [2024-12-10 00:15:04.336374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.336406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.336588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.336619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.336741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.336773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.337059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.337090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.337286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.337319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.337465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.337498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.337690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.337723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.337938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.337970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.338181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.338214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 00:33:29.670 [2024-12-10 00:15:04.338444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.670 [2024-12-10 00:15:04.338488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.670 qpair failed and we were unable to recover it. 
00:33:29.671 [2024-12-10 00:15:04.338718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.338750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.338945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.338976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.339181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.339217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.339453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.339488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.339633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.339665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.339892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.339925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.340052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.340083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.340207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.340241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.340379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.340411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.340534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.340565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 
00:33:29.671 [2024-12-10 00:15:04.340836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.340867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.341002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.341034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.341239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.341272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.341553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.341585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.341717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.341750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.341951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.341982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.342135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.342189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.342325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.342356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.342581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.342614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.342754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.342786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 
00:33:29.671 [2024-12-10 00:15:04.342971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.343002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.343135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.343174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.343330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.343361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.343641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.343673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.343893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.343925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.344128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.344171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.344452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.344485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.344761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.344793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.345015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.345050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.345246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.345279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 
00:33:29.671 [2024-12-10 00:15:04.345467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.345499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.345683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.345714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.345905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.345937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.346218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.346254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.346392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.346424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.346608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.346641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.346942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.671 [2024-12-10 00:15:04.346973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.671 qpair failed and we were unable to recover it. 00:33:29.671 [2024-12-10 00:15:04.347241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.347273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.347478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.347510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.347783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.347821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 
00:33:29.672 [2024-12-10 00:15:04.347972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.348004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.348121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.348153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.348365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.348397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.348521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.348554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.348743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.348776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.348979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.349010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.349117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.349147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.349266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.349299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.349482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.349514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.349695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.349727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 
00:33:29.672 [2024-12-10 00:15:04.349909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.349940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.350118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.350149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.350292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.350324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.350524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.350556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.350682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.350714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.350900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.350932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.351173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.351208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.351338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.351370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.351491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.351524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.351673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.351705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 
00:33:29.672 [2024-12-10 00:15:04.351909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.351941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.352120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.352152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.352415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.352448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.352657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.352689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.352918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.352951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.353229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.353262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.353549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.353582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.353788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.353820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.353930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.353962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.354166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.354199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 
00:33:29.672 [2024-12-10 00:15:04.354392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.354425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.354634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.354666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.354928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.354961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.672 [2024-12-10 00:15:04.355144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.672 [2024-12-10 00:15:04.355186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.672 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.355364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.355396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.355597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.355628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.355784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.355816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.356028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.356060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.356188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.356220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.356473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.356510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 
00:33:29.673 [2024-12-10 00:15:04.356649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.356681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.356862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.356894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.357018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.357050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.357174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.357208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.357333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.357365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.357477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.357508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.357641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.357673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.357924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.357956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.358149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.358191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.358376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.358407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 
00:33:29.673 [2024-12-10 00:15:04.358540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.358572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.358703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.358735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.358911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.358942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.359127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.359169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.359293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.359325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.359464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.359495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.359674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.359705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.360017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.360050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.360233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.360266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.360493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.360525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 
00:33:29.673 [2024-12-10 00:15:04.360779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.360811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.360955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.360986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.361174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.361207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.361328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.361360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.361546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.361578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.361795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.361826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.362115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.362148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.362312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.362344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.362470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.362502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 00:33:29.673 [2024-12-10 00:15:04.362799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.362832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.673 qpair failed and we were unable to recover it. 
00:33:29.673 [2024-12-10 00:15:04.363059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.673 [2024-12-10 00:15:04.363091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.363249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.363282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.363415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.363447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.363649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.363681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.363914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.363947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.364129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.364194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.364326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.364357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.364570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.364603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.364791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.364822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.364999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.365038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 
00:33:29.674 [2024-12-10 00:15:04.365234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.365268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.365459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.365490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.365667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.365698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.365835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.365867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.365987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.366019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.366322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.366354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.366535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.366567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.366860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.366892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.367205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.367238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.367391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.367422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 
00:33:29.674 [2024-12-10 00:15:04.367627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.367659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.367955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.367987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.368198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.368232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.368363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.368394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.368589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.368620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.368818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.368850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.369054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.369087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.369281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.369313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.369496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.369528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.369651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.369683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 
00:33:29.674 [2024-12-10 00:15:04.369966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.369997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.370117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.370149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.370265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.370298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.370414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.370447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.370646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.370678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.370932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.370963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.371149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.371191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.371372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.371404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.674 [2024-12-10 00:15:04.371529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.674 [2024-12-10 00:15:04.371561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.674 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.371755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.371787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 
00:33:29.675 [2024-12-10 00:15:04.371966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.371999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.372302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.372336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.372548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.372579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.372825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.372858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.372969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.373002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.373237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.373270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.373473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.373505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.373760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.373793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.374107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.374139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.374424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.374462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 
00:33:29.675 [2024-12-10 00:15:04.374665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.374697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.374899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.374930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.375129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.375170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.375295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.375327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.375549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.375581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.375784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.375816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.375995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.376027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.376274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.376307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.376443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.376475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.376606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.376638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 
00:33:29.675 [2024-12-10 00:15:04.376855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.376888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.377187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.377221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.377403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.377435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.377647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.377679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.377943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.377975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.378185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.378219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.378404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.378436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.378591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.378623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.378888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.378919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.379177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.379209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 
00:33:29.675 [2024-12-10 00:15:04.379461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.379493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.379693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.379726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.379843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.379875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.675 qpair failed and we were unable to recover it. 00:33:29.675 [2024-12-10 00:15:04.380076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.675 [2024-12-10 00:15:04.380107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.380299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.380334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.380548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.380580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.380787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.380819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.381076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.381108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.381264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.381297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.381574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.381607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 
00:33:29.676 [2024-12-10 00:15:04.381820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.381852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.382104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.382137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.382278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.382311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.382504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.382537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.382666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.382698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.382823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.382855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.382976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.383008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.383224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.383258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.383508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.383540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.383675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.383712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 
00:33:29.676 [2024-12-10 00:15:04.383831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.383864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.384140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.384183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.384322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.384354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.384552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.384585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.384718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.384750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.384937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.384969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.385147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.385188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.385391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.385422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.385568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.385600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.385798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.385830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 
00:33:29.676 [2024-12-10 00:15:04.386025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.386056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.386192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.386226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.386431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.386463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.386673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.386705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.386884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.386915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.387092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.387123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.387294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.387328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.387553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.387585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.387768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.387799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.387918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.387950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 
00:33:29.676 [2024-12-10 00:15:04.388087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.388119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.388383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.676 [2024-12-10 00:15:04.388416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.676 qpair failed and we were unable to recover it. 00:33:29.676 [2024-12-10 00:15:04.388689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.388721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.388859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.388891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.389074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.389106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.389239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.389271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.389404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.389436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.389569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.389600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.389786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.389818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.390024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.390057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 
00:33:29.677 [2024-12-10 00:15:04.390242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.390277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.390425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.390457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.390590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.390621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.390746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.390778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.390907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.390939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.391238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.391270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.391538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.391571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.391757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.391789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.391987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.392019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.392196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.392234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 
00:33:29.677 [2024-12-10 00:15:04.392412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.392444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.392578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.392610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.392865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.392897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.393021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.393053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.393179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.393213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.393367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.393400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.393576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.393608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.393918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.393949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.394106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.394139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.394438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.394470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 
00:33:29.677 [2024-12-10 00:15:04.394665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.394698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.394886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.394918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.395117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.395149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.395348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.395381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.395563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.395595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.395791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.395823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.396102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.396134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.396348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.396382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.396598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.396631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 00:33:29.677 [2024-12-10 00:15:04.396877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.677 [2024-12-10 00:15:04.396909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.677 qpair failed and we were unable to recover it. 
00:33:29.677 [2024-12-10 00:15:04.397113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.397146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.397417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.397449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.397738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.397770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.397961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.397994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.398179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.398213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.398394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.398425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.398739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.398819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.399035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.399070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.399238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.399273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.399419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.399452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 
00:33:29.678 [2024-12-10 00:15:04.399665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.399696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.399817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.399848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.400058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.400090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.400335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.400367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.400594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.400626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.400853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.400885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.401063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.401094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.401312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.401346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.401524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.401555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.401701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.401733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 
00:33:29.678 [2024-12-10 00:15:04.401868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.401901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.402145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.402188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.402322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.402354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.402538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.402570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.402819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.402852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.402964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.402995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.403182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.403215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.403407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.403438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.403558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.403589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 00:33:29.678 [2024-12-10 00:15:04.403719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.678 [2024-12-10 00:15:04.403749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.678 qpair failed and we were unable to recover it. 
[... the same three-line failure - posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it - repeats for every connection attempt logged from 00:15:04.403971 through 00:15:04.447229 ...]
00:33:29.684 [2024-12-10 00:15:04.447369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.684 [2024-12-10 00:15:04.447401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:29.684 qpair failed and we were unable to recover it.
00:33:29.684 [2024-12-10 00:15:04.447522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.447554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.447668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.447700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.447898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.447930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.448143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.448184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.448367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.448400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.448602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.448632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.448831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.448863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.449044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.449078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.449208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.449241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.449371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.449407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 
00:33:29.684 [2024-12-10 00:15:04.449516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.449548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.449739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.449775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.449977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.450010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.450216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.450253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.450368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.450400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.450512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.450545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.450679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.450712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.450829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.450861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.451072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.451110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.451353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.451385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 
00:33:29.684 [2024-12-10 00:15:04.451589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.451629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.451827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.451858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.452063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.452095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.452262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.452295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.452471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.452503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.452617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.452648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.452786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.452818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.452946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.452977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.453107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.453138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.453353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.453386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 
00:33:29.684 [2024-12-10 00:15:04.453588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.453620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.453747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.453779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.684 [2024-12-10 00:15:04.454074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.684 [2024-12-10 00:15:04.454107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.684 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.454319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.454352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.454513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.454544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.454677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.454709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.454840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.454875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.455126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.455156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.455303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.455335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.455545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.455578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 
00:33:29.685 [2024-12-10 00:15:04.455710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.455745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.455940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.455973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.456091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.456122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.456388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.456422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.456543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.456576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.456822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.456854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.457029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.457062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.457299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.457338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.457472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.457503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.457643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.457676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 
00:33:29.685 [2024-12-10 00:15:04.457882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.457914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.458049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.458081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.458348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.458381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.458512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.458544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.458670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.458702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.458988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.459021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.459200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.459233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.459346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.459378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.459569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.459602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.459819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.459850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 
00:33:29.685 [2024-12-10 00:15:04.460035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.460070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.460269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.460304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.460411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.460445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.460627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.460659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.460889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.460921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.461203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.461236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.461377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.461409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.461590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.461622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.461844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.461876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.462055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.462087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 
00:33:29.685 [2024-12-10 00:15:04.462228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.462261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.685 [2024-12-10 00:15:04.462413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.685 [2024-12-10 00:15:04.462444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.685 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.462566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.462597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.462737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.462768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.463040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.463072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.463292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.463326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.463445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.463477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.463680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.463712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.463837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.463870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.463994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.464025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 
00:33:29.686 [2024-12-10 00:15:04.464211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.464245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.464424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.464458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.464574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.464605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.464728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.464760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.464964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.464996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.465124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.465156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.465425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.465457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.465663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.465696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.466001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.466074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.466523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.466598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 
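Every failure logged above is the same underlying condition: connect() to 10.0.0.2 port 4420 fails with errno = 111, which on Linux is ECONNREFUSED, i.e. nothing was accepting TCP connections on the NVMe/TCP port at that moment, so nvme_tcp_qpair_connect_sock cannot bring the qpair's socket up. A quick standalone check of that errno mapping on the test host (this is plain libc, not SPDK code):

    /* Print what errno 111 means on this host; on Linux it is ECONNREFUSED. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        printf("errno 111 = %s\n", strerror(111));   /* "Connection refused" */
        return 0;
    }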
00:33:29.686 [2024-12-10 00:15:04.468333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.468399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.468571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.468607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.468836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.468869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.469070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.469103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.469333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.469367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.469578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.469611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.469744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.469777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.469967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.469999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.470200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.470234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.470457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.470492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 
00:33:29.686 [2024-12-10 00:15:04.470644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.470676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.470806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.470849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.471036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.471067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.471263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.471295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.471489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.471521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.471646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.471679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.471890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.471923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.472178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.472211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.472330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.472362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.686 [2024-12-10 00:15:04.472562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.472592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 
00:33:29.686 [2024-12-10 00:15:04.472771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.686 [2024-12-10 00:15:04.472802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.686 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.472981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.473015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.473215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.473248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.473448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.473481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.473603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.473634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.473838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.473871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.474039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.474071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.474206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.474240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.474435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.474468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.474649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.474681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 
00:33:29.687 [2024-12-10 00:15:04.474893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.474925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.475180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.475214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.475420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.475452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.475585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.475617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.475810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.475843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.476140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.476181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.476315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.476347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.476502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.476537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.476673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.476706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 00:33:29.687 [2024-12-10 00:15:04.476909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.687 [2024-12-10 00:15:04.476943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.687 qpair failed and we were unable to recover it. 
00:33:29.687 [2024-12-10 00:15:04.477074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.687 [2024-12-10 00:15:04.477105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420
00:33:29.687 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 from 00:15:04.477329 through 00:15:04.484310 ...]
00:33:29.688 [2024-12-10 00:15:04.484499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.688 [2024-12-10 00:15:04.484577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420
00:33:29.688 qpair failed and we were unable to recover it.
[... the same sequence then repeats for tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 from 00:15:04.484742 through 00:15:04.517050 ...]
00:33:29.693 [2024-12-10 00:15:04.517153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.517192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.517291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.517320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.517436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.517465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.517664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.517693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.517800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.517828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.517938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.517966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.518081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.518110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.518254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.518284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.518389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.518419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.518538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.518567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 
00:33:29.693 [2024-12-10 00:15:04.518742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.518771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.520278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.520331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.520558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.520590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.520698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.520727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.520918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.520947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.521120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.521150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.521279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.521309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.521418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.521446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.521562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.521591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.521781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.521809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 
00:33:29.693 [2024-12-10 00:15:04.521979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.522008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.522197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.522227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.522339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.522368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.522500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.522529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.522701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.522729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.522849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.522878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.523003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.523032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.523140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.693 [2024-12-10 00:15:04.523197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.693 qpair failed and we were unable to recover it. 00:33:29.693 [2024-12-10 00:15:04.523294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.523329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.523450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.523478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 
00:33:29.694 [2024-12-10 00:15:04.523669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.523698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.523948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.523976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.524150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.524189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.524360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.524389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.524512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.524550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.524652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.524677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.524800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.524824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.524926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.524950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.525055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.525079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.525242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.525267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 
00:33:29.694 [2024-12-10 00:15:04.525459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.525484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.525594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.525619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.525806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.525831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.525995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.526019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.526197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.526223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.526388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.526413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.526574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.526599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.526793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.526825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.527024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.527056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.527267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.527299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 
00:33:29.694 [2024-12-10 00:15:04.527419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.527450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.527578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.527610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.527820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.527850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.528026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.528068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.528264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.528289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.528395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.528418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.528629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.528653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.528835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.528861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.529112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.529138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.529329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.529354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 
00:33:29.694 [2024-12-10 00:15:04.529510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.529535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.529846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.529872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.530188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.530215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.530453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.530479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.530652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.530677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.530888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.530913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.531019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.694 [2024-12-10 00:15:04.531044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.694 qpair failed and we were unable to recover it. 00:33:29.694 [2024-12-10 00:15:04.531209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.531236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.531341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.531370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.531509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.531534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 
00:33:29.695 [2024-12-10 00:15:04.531648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.531673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.531933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.531957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.532134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.532215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.532388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.532414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.532597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.532623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.532811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.532836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.532951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.532977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.533069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.533094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.533214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.533241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.533444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.533469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 
00:33:29.695 [2024-12-10 00:15:04.533581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.533606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.533879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.533903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.534073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.534098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.534208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.534234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.534491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.534517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.534641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.534670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.534805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.534834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.535003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.535032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.535152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.535190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.535391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.535419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 
00:33:29.695 [2024-12-10 00:15:04.535596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.535625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.535910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.535939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.536208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.536239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.536426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.536455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.536688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.536718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.537056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.537085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.537194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.537224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.537441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.537470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.537732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.537760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.537941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.537970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 
00:33:29.695 [2024-12-10 00:15:04.538241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.538272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.538514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.538543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.538717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.538746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.538933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.538961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.539167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.539196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.695 [2024-12-10 00:15:04.539398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.695 [2024-12-10 00:15:04.539427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.695 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.539582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.539610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.539803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.539831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.540021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.540055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.540310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.540341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 
00:33:29.696 [2024-12-10 00:15:04.540459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.540488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.540754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.540782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.540884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.540913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.541154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.541195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.541369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.541397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.541601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.541630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.541818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.541848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.542015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.542045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.542237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.542267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.542470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.542498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 
00:33:29.696 [2024-12-10 00:15:04.542679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.542707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.542970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.542998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.543117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.543145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.543325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.543354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.543480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.543508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.543679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.543708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.543976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.544004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.544198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.544228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.544409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.544436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 00:33:29.696 [2024-12-10 00:15:04.544722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.696 [2024-12-10 00:15:04.544754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:29.696 qpair failed and we were unable to recover it. 
00:33:29.696 [2024-12-10 00:15:04.544934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.696 [2024-12-10 00:15:04.544965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420
00:33:29.696 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f029c000b90 (addr=10.0.0.2, port=4420) repeats continuously from application timestamps 00:15:04.545 through 00:15:04.573 (console timestamps 00:33:29.696-00:33:29.991), every attempt ending with "qpair failed and we were unable to recover it." ...]
00:33:29.991 [2024-12-10 00:15:04.573250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:29.991 [2024-12-10 00:15:04.573329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420
00:33:29.991 qpair failed and we were unable to recover it.
[... the same error pair then repeats for tqpair=0x7f0290000b90 (addr=10.0.0.2, port=4420) from application timestamps 00:15:04.573 through 00:15:04.590 (console timestamps 00:33:29.991-00:33:29.993), again ending each time with "qpair failed and we were unable to recover it." ...]
00:33:29.993 [2024-12-10 00:15:04.591091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.993 [2024-12-10 00:15:04.591124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.993 qpair failed and we were unable to recover it. 00:33:29.993 [2024-12-10 00:15:04.591278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.993 [2024-12-10 00:15:04.591312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.993 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.591589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.591622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.591755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.591788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.591965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.591996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.592290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.592323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.592458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.592490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.592688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.592720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.592840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.592878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.593180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.593213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 
00:33:29.994 [2024-12-10 00:15:04.593348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.593380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.593574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.593606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.593740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.593771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.594032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.594064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.594278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.594312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.594445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.594477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.594626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.594658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.594868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.594900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.595110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.595142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 00:33:29.994 [2024-12-10 00:15:04.595372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.595405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.994 qpair failed and we were unable to recover it. 
00:33:29.994 [2024-12-10 00:15:04.595613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.994 [2024-12-10 00:15:04.595645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.595847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.595880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.596065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.596097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.596308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.596342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.596467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.596498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.596777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.596809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.596938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.596969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.597241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.597274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.597451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.597484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.597687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.597719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 
00:33:29.995 [2024-12-10 00:15:04.597842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.597875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.598083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.598115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.598245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.598278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.598391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.598422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.598609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.598641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.598870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.598902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.599078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.599110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.599323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.599356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.599551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.599584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.599716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.599748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 
00:33:29.995 [2024-12-10 00:15:04.599957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.599989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.600267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.600299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.600575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.600607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.600804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.600837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.601110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.601142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.601443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.601475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.601739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.601771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.601892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.995 [2024-12-10 00:15:04.601924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.995 qpair failed and we were unable to recover it. 00:33:29.995 [2024-12-10 00:15:04.602205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.602246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.602427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.602459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 
00:33:29.996 [2024-12-10 00:15:04.602655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.602688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.602931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.602962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.603212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.603245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.603355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.603387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.603499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.603531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.603810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.603842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.604098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.604131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.604415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.604447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.604647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.604679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.604797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.604828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 
00:33:29.996 [2024-12-10 00:15:04.605007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.605040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.605191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.605224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.605365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.605397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.605516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.605548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.605809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.605841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.606051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.606082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.606286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.606319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.606526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.606557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.606683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.606714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.606836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.606867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 
00:33:29.996 [2024-12-10 00:15:04.606985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.607016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.607203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.607236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.607439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.607471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.607601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.607634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.607760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.607792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.608021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.996 [2024-12-10 00:15:04.608053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.996 qpair failed and we were unable to recover it. 00:33:29.996 [2024-12-10 00:15:04.608316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.608349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.608489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.608521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.608716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.608747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.609021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.609053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 
00:33:29.997 [2024-12-10 00:15:04.609194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.609227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.609367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.609399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.609676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.609708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.609843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.609874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.610107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.610139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.610362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.610395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.610690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.610721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.610922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.610954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.611207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.611241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.611464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.611496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 
00:33:29.997 [2024-12-10 00:15:04.611747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.611779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.611998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.612031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.612209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.612242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.612419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.612451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.612742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.612773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.612955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.612987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.613114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.613145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.613289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.613322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.613518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.613549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.613680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.613712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 
00:33:29.997 [2024-12-10 00:15:04.614035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.614068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.614246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.614279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.614464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.614497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.997 [2024-12-10 00:15:04.614729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.997 [2024-12-10 00:15:04.614760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.997 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.614958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.614989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.615110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.615141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.615356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.615389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.615539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.615571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.615780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.615813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.616099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.616131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 
00:33:29.998 [2024-12-10 00:15:04.616335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.616368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.616550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.616582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.616790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.616822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.616954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.616986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.617188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.617221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.617400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.617443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.617555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.617587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.617713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.617745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.617924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.617957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.618202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.618236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 
00:33:29.998 [2024-12-10 00:15:04.618346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.618378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.618559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.618590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.618702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.618733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.618842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.618873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.619178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.619212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.619340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.619372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.619625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.619657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.619791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.619822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.620002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.620034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.620224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.620258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 
00:33:29.998 [2024-12-10 00:15:04.620373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.620405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.620585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.620617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.620873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.620905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.621083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.998 [2024-12-10 00:15:04.621114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.998 qpair failed and we were unable to recover it. 00:33:29.998 [2024-12-10 00:15:04.621302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.621336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.621455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.621487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.621736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.621768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.621948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.621980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.622183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.622216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.622471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.622503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 
00:33:29.999 [2024-12-10 00:15:04.622630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.622662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.622904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.622937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.623063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.623095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.623305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.623338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.623461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.623494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.623674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.623708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.623903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.623935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.624129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.624170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.624291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.624324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.624543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.624575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 
00:33:29.999 [2024-12-10 00:15:04.624793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.624825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.624952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.624984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.625234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.625268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.625392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.625424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.625606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.625638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.625877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.625916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.626121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.626153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.626346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.626379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.626630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.626661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.626782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.626814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 
00:33:29.999 [2024-12-10 00:15:04.627002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.627033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.627203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:29.999 [2024-12-10 00:15:04.627237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:29.999 qpair failed and we were unable to recover it. 00:33:29.999 [2024-12-10 00:15:04.627369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.627400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.627582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.627614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.627726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.627757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.627946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.627978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.628092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.628124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.628264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.628297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.628435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.628466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.628707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.628740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 
00:33:30.000 [2024-12-10 00:15:04.628860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.628891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.629085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.629117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.629264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.629296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.629419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.629450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.629631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.629663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.629774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.629805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.630054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.630086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.630339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.630372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.000 qpair failed and we were unable to recover it. 00:33:30.000 [2024-12-10 00:15:04.630628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.000 [2024-12-10 00:15:04.630660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.630940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.630971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 
00:33:30.001 [2024-12-10 00:15:04.631224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.631256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.631460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.631491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.631625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.631657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.631877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.631910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.632151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.632197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.632381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.632412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.632594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.632628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.632781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.632812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.633131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.633171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.633319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.633352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 
00:33:30.001 [2024-12-10 00:15:04.633531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.633562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.633690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.633722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.633934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.633966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.634170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.634203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.634338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.634370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.634522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.634559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.634777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.634809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.634931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.634962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.635183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.635216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.635442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.635474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 
00:33:30.001 [2024-12-10 00:15:04.635728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.635760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.635884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.635917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.636038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.636070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.636326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.636360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.636519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.636552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.636760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.636792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.636914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.636946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.637072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.001 [2024-12-10 00:15:04.637105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.001 qpair failed and we were unable to recover it. 00:33:30.001 [2024-12-10 00:15:04.637276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.637309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.637497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.637529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 
00:33:30.002 [2024-12-10 00:15:04.637722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.637754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.637878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.637910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.638120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.638152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.638362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.638395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.638604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.638636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.638766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.638798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.639026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.639059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.639239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.639272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.639416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.639449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.639700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.639732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 
00:33:30.002 [2024-12-10 00:15:04.639934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.639967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.640182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.640217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.640430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.640462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.640654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.640686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.640820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.640853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.641043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.641074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.641265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.641298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.641433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.641466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.641655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.641687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.641882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.641915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 
00:33:30.002 [2024-12-10 00:15:04.642093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.642124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.642350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.642383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.642577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.642609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.642729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.642762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.643034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.643066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.643194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.643231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.643446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.643478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.643600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.643629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.643767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.643798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 00:33:30.002 [2024-12-10 00:15:04.643976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.002 [2024-12-10 00:15:04.644008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.002 qpair failed and we were unable to recover it. 
00:33:30.003 [2024-12-10 00:15:04.644267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.644300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.644483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.644516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.644712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.644744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.644878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.644909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.645049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.645081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.645301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.645333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.645522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.645555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.645679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.645711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.645840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.645872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.645996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.646027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 
00:33:30.003 [2024-12-10 00:15:04.646237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.646274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.646479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.646513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.646639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.646671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.646784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.646817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.647101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.647134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.647286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.647318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.647498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.647530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.647742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.647775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.648049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.648081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.648304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.648338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 
00:33:30.003 [2024-12-10 00:15:04.648489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.648520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.648629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.648661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.648782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.648815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.649021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.649052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.649174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.649207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.649346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.649379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.649619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.649651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.649963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.649995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.650202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.003 [2024-12-10 00:15:04.650236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.003 qpair failed and we were unable to recover it. 00:33:30.003 [2024-12-10 00:15:04.650419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.650452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 
00:33:30.004 [2024-12-10 00:15:04.650581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.650613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.650820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.650854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.650962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.650994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.651280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.651314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.651442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.651474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.651677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.651714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.651939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.651971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.652097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.652129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.652355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.652388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.652575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.652608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 
00:33:30.004 [2024-12-10 00:15:04.652808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.652841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.653019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.653050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.653244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.653276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.653501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.653534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.653663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.653695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.653820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.653852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.653971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.654003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.654255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.654289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.654419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.654453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.654696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.654729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 
00:33:30.004 [2024-12-10 00:15:04.654842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.654873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.655051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.655083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.655269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.655302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.655431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.655464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.655594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.655627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.655744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.655775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.656033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.656066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.656297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.656331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.656589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.656621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.656852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.656884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 
00:33:30.004 [2024-12-10 00:15:04.657065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.657097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.004 [2024-12-10 00:15:04.657293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.004 [2024-12-10 00:15:04.657327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.004 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.657539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.657571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.657690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.657722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.657931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.657962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.658082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.658114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.658403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.658437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.658574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.658607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.658736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.658768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.659032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.659064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 
00:33:30.005 [2024-12-10 00:15:04.659248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.659282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.659416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.659448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.659571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.659603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.659804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.659837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.660076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.660108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.660244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.660288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.660419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.660451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.660634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.660667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.660876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.660909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.661111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.661143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 
00:33:30.005 [2024-12-10 00:15:04.661357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.661390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.661592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.661624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.661752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.661784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.661959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.661990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.662249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.662284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.662468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.662503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.662634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.662666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.662784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.662816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.662992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.663025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.663263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.663296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 
00:33:30.005 [2024-12-10 00:15:04.663443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.663474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.005 [2024-12-10 00:15:04.663689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.005 [2024-12-10 00:15:04.663720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.005 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.663846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.663878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.664059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.664092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.664349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.664383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.664588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.664620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.664817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.664849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.665033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.665064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.665176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.665210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.665326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.665359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 
00:33:30.006 [2024-12-10 00:15:04.665573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.665605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.665801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.665833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.666036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.666071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.666272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.666306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.666440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.666472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.666689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.666721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.666922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.666953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.667218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.667252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.667364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.667396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.667646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.667677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 
00:33:30.006 [2024-12-10 00:15:04.667799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.667835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.668054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.668085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.668264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.668297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.668449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.668481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.668688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.668720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.006 qpair failed and we were unable to recover it. 00:33:30.006 [2024-12-10 00:15:04.668907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.006 [2024-12-10 00:15:04.668946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.669176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.669210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.669358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.669391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.669524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.669555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.669677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.669709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 
00:33:30.007 [2024-12-10 00:15:04.669887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.669919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.670095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.670127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.670338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.670372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.670553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.670586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.670697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.670729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.670911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.670943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.671122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.671155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.671415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.671446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.671575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.671608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.671734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.671766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 
00:33:30.007 [2024-12-10 00:15:04.671893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.671925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.672123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.672156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.672303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.672335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.672473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.672505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.672635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.672667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.672859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.672890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.673198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.673231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.673391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.673424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.673700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.673731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.673853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.673885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 
00:33:30.007 [2024-12-10 00:15:04.674071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.674103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.674348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.674382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.674665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.674698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.674903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.674936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.675196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.675230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.675440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.675471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.675603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.007 [2024-12-10 00:15:04.675635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.007 qpair failed and we were unable to recover it. 00:33:30.007 [2024-12-10 00:15:04.675764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.675796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.675974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.676007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.676185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.676219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 
00:33:30.008 [2024-12-10 00:15:04.676404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.676437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.676558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.676591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.676775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.676807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.676985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.677018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.677200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.677233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.677410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.677447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.677723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.677755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.678033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.678064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.678282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.678314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.678607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.678639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 
00:33:30.008 [2024-12-10 00:15:04.678764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.678796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.678972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.679004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.679220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.679253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.679452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.679484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.679629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.679662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.679795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.679827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.680014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.680047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.680246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.680280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.680505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.680537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.680651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.680683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 
00:33:30.008 [2024-12-10 00:15:04.680816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.680848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.681048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.681080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.681205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.681238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.681586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.681619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.681821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.681852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.682032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.682065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.682248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.682281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.008 [2024-12-10 00:15:04.682439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.008 [2024-12-10 00:15:04.682472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.008 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.682664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.682695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.682897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.682929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 
00:33:30.009 [2024-12-10 00:15:04.683129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.683170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.683400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.683432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.683697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.683777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.684012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.684048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.684239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.684275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.684483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.684517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.684868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.684900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.685178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.685212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.685445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.685477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.685607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.685638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 
00:33:30.009 [2024-12-10 00:15:04.685838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.685870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.686059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.686090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.686365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.686399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.686580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.686612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.686735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.686767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.686889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.686921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.687062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.687094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.687214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.687246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.687496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.687527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.687814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.687846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 
00:33:30.009 [2024-12-10 00:15:04.688028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.688060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.688337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.688372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.688494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.688525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.688645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.688675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.688982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.689014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.689210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.689242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.689432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.689464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.689735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.689766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.689945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.009 [2024-12-10 00:15:04.689977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.009 qpair failed and we were unable to recover it. 00:33:30.009 [2024-12-10 00:15:04.690183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.690222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 
00:33:30.010 [2024-12-10 00:15:04.690412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.690444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.690624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.690657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.690841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.690873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.691147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.691189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.691420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.691453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.691567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.691598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.691804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.691834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.692011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.692043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.692298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.692330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.692512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.692545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 
00:33:30.010 [2024-12-10 00:15:04.692746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.692778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.693026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.693057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.693242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.693275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.693439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.693470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.693648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.693679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.693881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.693912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.694088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.694119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.694315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.694349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.694541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.694574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 00:33:30.010 [2024-12-10 00:15:04.694798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.010 [2024-12-10 00:15:04.694830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.010 qpair failed and we were unable to recover it. 
00:33:30.010 [2024-12-10 00:15:04.695008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.010 [2024-12-10 00:15:04.695039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:30.010 qpair failed and we were unable to recover it.
00:33:30.010 [... the same three-line error sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously for every connection attempt logged between 00:15:04.695008 and 00:15:04.742040, all against addr=10.0.0.2, port=4420, for tqpair values 0x24c9be0, 0x7f029c000b90, and 0x7f0294000b90; every attempt failed and none of the qpairs could be recovered ...]
00:33:30.018 [2024-12-10 00:15:04.742244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.742278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-12-10 00:15:04.742547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.742579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-12-10 00:15:04.742708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.742740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-12-10 00:15:04.743017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.743049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-12-10 00:15:04.743230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.743262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-12-10 00:15:04.743396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.743427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-12-10 00:15:04.743534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.743565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-12-10 00:15:04.743773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.743805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-12-10 00:15:04.744028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.744061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-12-10 00:15:04.744288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.744325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 
00:33:30.018 [2024-12-10 00:15:04.744553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.744584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-12-10 00:15:04.744730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.744762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-12-10 00:15:04.744973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.745007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-12-10 00:15:04.745312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.745346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-12-10 00:15:04.745541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.018 [2024-12-10 00:15:04.745573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.018 qpair failed and we were unable to recover it. 00:33:30.018 [2024-12-10 00:15:04.745768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.745801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-12-10 00:15:04.745978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.746011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-12-10 00:15:04.746130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.746170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-12-10 00:15:04.746352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.746384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-12-10 00:15:04.746562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.746594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 
00:33:30.019 [2024-12-10 00:15:04.746791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.746823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-12-10 00:15:04.747043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.747074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-12-10 00:15:04.747264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.747298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-12-10 00:15:04.747481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.747515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-12-10 00:15:04.747707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.747737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-12-10 00:15:04.747936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.747967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-12-10 00:15:04.748210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.748245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-12-10 00:15:04.748386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.748418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-12-10 00:15:04.748617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.748648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.019 [2024-12-10 00:15:04.748827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.748859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 
00:33:30.019 [2024-12-10 00:15:04.749153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.019 [2024-12-10 00:15:04.749211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.019 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.749441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.749472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.749606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.749637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.749831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.749862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.749986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.750018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.750132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.750174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.750355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.750386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.750661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.750693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.751011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.751041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.751288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.751324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 
00:33:30.020 [2024-12-10 00:15:04.751461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.751493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.751620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.751652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.751827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.751860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.751980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.752011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.752130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.752169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.752371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.752403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.752516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.752548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.752669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.752702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.752916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.752948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.753203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.753237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 
00:33:30.020 [2024-12-10 00:15:04.753359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.753390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.753570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.753602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.753848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.753888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.754068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.754099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.754309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.020 [2024-12-10 00:15:04.754342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.020 qpair failed and we were unable to recover it. 00:33:30.020 [2024-12-10 00:15:04.754480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.754513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-12-10 00:15:04.754651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.754683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-12-10 00:15:04.754875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.754907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-12-10 00:15:04.755089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.755121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-12-10 00:15:04.755319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.755354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 
00:33:30.021 [2024-12-10 00:15:04.755474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.755506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-12-10 00:15:04.755687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.755719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-12-10 00:15:04.755989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.756022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-12-10 00:15:04.756288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.756323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-12-10 00:15:04.756516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.756548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-12-10 00:15:04.756729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.756760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-12-10 00:15:04.756946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.756978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-12-10 00:15:04.757100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.757131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-12-10 00:15:04.757274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.757306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-12-10 00:15:04.757426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.757458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 
00:33:30.021 [2024-12-10 00:15:04.757595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.021 [2024-12-10 00:15:04.757626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.021 qpair failed and we were unable to recover it. 00:33:30.021 [2024-12-10 00:15:04.757850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.757882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-12-10 00:15:04.758057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.758090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-12-10 00:15:04.758228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.758262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-12-10 00:15:04.758498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.758529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-12-10 00:15:04.758721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.758754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-12-10 00:15:04.759095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.759128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-12-10 00:15:04.759349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.759385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-12-10 00:15:04.759587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.759619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-12-10 00:15:04.759822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.759855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 
00:33:30.022 [2024-12-10 00:15:04.759978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.760010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-12-10 00:15:04.760191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.760233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-12-10 00:15:04.760412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.760444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-12-10 00:15:04.760636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.760668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-12-10 00:15:04.760852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.760883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-12-10 00:15:04.761009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.022 [2024-12-10 00:15:04.761042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.022 qpair failed and we were unable to recover it. 00:33:30.022 [2024-12-10 00:15:04.761188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.023 [2024-12-10 00:15:04.761221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.023 qpair failed and we were unable to recover it. 00:33:30.023 [2024-12-10 00:15:04.761422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.023 [2024-12-10 00:15:04.761455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.023 qpair failed and we were unable to recover it. 00:33:30.023 [2024-12-10 00:15:04.761568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.023 [2024-12-10 00:15:04.761599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.023 qpair failed and we were unable to recover it. 00:33:30.023 [2024-12-10 00:15:04.761716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.023 [2024-12-10 00:15:04.761748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.023 qpair failed and we were unable to recover it. 
00:33:30.023 [2024-12-10 00:15:04.761871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.023 [2024-12-10 00:15:04.761903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.023 qpair failed and we were unable to recover it. 00:33:30.023 [2024-12-10 00:15:04.762035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.023 [2024-12-10 00:15:04.762067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.023 qpair failed and we were unable to recover it. 00:33:30.023 [2024-12-10 00:15:04.762293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.023 [2024-12-10 00:15:04.762332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.023 qpair failed and we were unable to recover it. 00:33:30.023 [2024-12-10 00:15:04.762513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.023 [2024-12-10 00:15:04.762545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.023 qpair failed and we were unable to recover it. 00:33:30.023 [2024-12-10 00:15:04.762689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.023 [2024-12-10 00:15:04.762721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.023 qpair failed and we were unable to recover it. 00:33:30.023 [2024-12-10 00:15:04.762844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.023 [2024-12-10 00:15:04.762876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.023 qpair failed and we were unable to recover it. 00:33:30.023 [2024-12-10 00:15:04.763059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.023 [2024-12-10 00:15:04.763090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.023 qpair failed and we were unable to recover it. 00:33:30.024 [2024-12-10 00:15:04.763261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-12-10 00:15:04.763294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 00:33:30.024 [2024-12-10 00:15:04.763424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-12-10 00:15:04.763455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 00:33:30.024 [2024-12-10 00:15:04.763655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-12-10 00:15:04.763688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 
00:33:30.024 [2024-12-10 00:15:04.763893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-12-10 00:15:04.763925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 00:33:30.024 [2024-12-10 00:15:04.764049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-12-10 00:15:04.764080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 00:33:30.024 [2024-12-10 00:15:04.764223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-12-10 00:15:04.764259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 00:33:30.024 [2024-12-10 00:15:04.764535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-12-10 00:15:04.764567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 00:33:30.024 [2024-12-10 00:15:04.764747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-12-10 00:15:04.764780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 00:33:30.024 [2024-12-10 00:15:04.764960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-12-10 00:15:04.764992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 00:33:30.024 [2024-12-10 00:15:04.765204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-12-10 00:15:04.765239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 00:33:30.024 [2024-12-10 00:15:04.765420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-12-10 00:15:04.765452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 00:33:30.024 [2024-12-10 00:15:04.765579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.024 [2024-12-10 00:15:04.765612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.024 qpair failed and we were unable to recover it. 00:33:30.025 [2024-12-10 00:15:04.765814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-12-10 00:15:04.765846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 
00:33:30.025 [2024-12-10 00:15:04.765974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-12-10 00:15:04.766005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-12-10 00:15:04.766216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-12-10 00:15:04.766250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-12-10 00:15:04.766458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-12-10 00:15:04.766490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-12-10 00:15:04.766672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-12-10 00:15:04.766705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-12-10 00:15:04.766991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-12-10 00:15:04.767022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-12-10 00:15:04.767247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-12-10 00:15:04.767282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-12-10 00:15:04.767419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-12-10 00:15:04.767452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-12-10 00:15:04.767654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-12-10 00:15:04.767686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-12-10 00:15:04.767879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-12-10 00:15:04.767911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 00:33:30.025 [2024-12-10 00:15:04.768125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.025 [2024-12-10 00:15:04.768184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.025 qpair failed and we were unable to recover it. 
00:33:30.026 [2024-12-10 00:15:04.768412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-12-10 00:15:04.768444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-12-10 00:15:04.768639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-12-10 00:15:04.768671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-12-10 00:15:04.768894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-12-10 00:15:04.768926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-12-10 00:15:04.769046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-12-10 00:15:04.769077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-12-10 00:15:04.769295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-12-10 00:15:04.769329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-12-10 00:15:04.769506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-12-10 00:15:04.769538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-12-10 00:15:04.769716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-12-10 00:15:04.769748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-12-10 00:15:04.770034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-12-10 00:15:04.770066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-12-10 00:15:04.770275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-12-10 00:15:04.770308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 00:33:30.026 [2024-12-10 00:15:04.770516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.026 [2024-12-10 00:15:04.770549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.026 qpair failed and we were unable to recover it. 
00:33:30.026 [2024-12-10 00:15:04.770732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-12-10 00:15:04.770764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-12-10 00:15:04.770943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-12-10 00:15:04.770976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-12-10 00:15:04.771189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-12-10 00:15:04.771236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-12-10 00:15:04.771352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-12-10 00:15:04.771385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-12-10 00:15:04.771577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-12-10 00:15:04.771609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-12-10 00:15:04.771803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-12-10 00:15:04.771834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-12-10 00:15:04.771955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-12-10 00:15:04.771987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-12-10 00:15:04.772214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-12-10 00:15:04.772247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-12-10 00:15:04.772426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-12-10 00:15:04.772459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-12-10 00:15:04.772738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-12-10 00:15:04.772770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 
00:33:30.027 [2024-12-10 00:15:04.772953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-12-10 00:15:04.772985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-12-10 00:15:04.773277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-12-10 00:15:04.773310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-12-10 00:15:04.773441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-12-10 00:15:04.773472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.027 [2024-12-10 00:15:04.773687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.027 [2024-12-10 00:15:04.773719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.027 qpair failed and we were unable to recover it. 00:33:30.028 [2024-12-10 00:15:04.773851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-12-10 00:15:04.773882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-12-10 00:15:04.774153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-12-10 00:15:04.774194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-12-10 00:15:04.774380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-12-10 00:15:04.774411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-12-10 00:15:04.774662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-12-10 00:15:04.774694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-12-10 00:15:04.774869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-12-10 00:15:04.774900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-12-10 00:15:04.775095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-12-10 00:15:04.775126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 
00:33:30.028 [2024-12-10 00:15:04.775292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-12-10 00:15:04.775327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-12-10 00:15:04.775439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-12-10 00:15:04.775471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-12-10 00:15:04.775751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-12-10 00:15:04.775782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-12-10 00:15:04.775958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.028 [2024-12-10 00:15:04.775989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.028 qpair failed and we were unable to recover it. 00:33:30.028 [2024-12-10 00:15:04.776194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-12-10 00:15:04.776228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-12-10 00:15:04.776407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-12-10 00:15:04.776438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-12-10 00:15:04.776612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-12-10 00:15:04.776644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-12-10 00:15:04.776966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-12-10 00:15:04.776999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-12-10 00:15:04.777196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-12-10 00:15:04.777229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-12-10 00:15:04.777366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-12-10 00:15:04.777398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 
00:33:30.029 [2024-12-10 00:15:04.777550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-12-10 00:15:04.777582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-12-10 00:15:04.777698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-12-10 00:15:04.777729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-12-10 00:15:04.777860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-12-10 00:15:04.777892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-12-10 00:15:04.778114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-12-10 00:15:04.778145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-12-10 00:15:04.778361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-12-10 00:15:04.778392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.029 qpair failed and we were unable to recover it. 00:33:30.029 [2024-12-10 00:15:04.778515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.029 [2024-12-10 00:15:04.778545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-12-10 00:15:04.778749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-12-10 00:15:04.778781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-12-10 00:15:04.778983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-12-10 00:15:04.779014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-12-10 00:15:04.779307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.030 [2024-12-10 00:15:04.779343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.030 qpair failed and we were unable to recover it. 00:33:30.030 [2024-12-10 00:15:04.779554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-12-10 00:15:04.779590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 
00:33:30.031 [2024-12-10 00:15:04.779719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-12-10 00:15:04.779753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-12-10 00:15:04.779873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-12-10 00:15:04.779906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-12-10 00:15:04.780190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-12-10 00:15:04.780239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-12-10 00:15:04.780421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-12-10 00:15:04.780455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-12-10 00:15:04.780611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-12-10 00:15:04.780644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-12-10 00:15:04.780868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-12-10 00:15:04.780903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-12-10 00:15:04.781167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-12-10 00:15:04.781201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-12-10 00:15:04.781407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-12-10 00:15:04.781441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-12-10 00:15:04.781573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-12-10 00:15:04.781608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-12-10 00:15:04.781738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-12-10 00:15:04.781772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 
00:33:30.031 [2024-12-10 00:15:04.781956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-12-10 00:15:04.781991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.031 [2024-12-10 00:15:04.782183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.031 [2024-12-10 00:15:04.782223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.031 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.782519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.782555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.782680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.782714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.782898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.782931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.783110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.783144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.783306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.783341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.783468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.783502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.783704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.783737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.783938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.783973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 
00:33:30.032 [2024-12-10 00:15:04.784175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.784210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.784466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.784500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.784633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.784666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.784873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.784907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.785095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.785129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.785422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.785456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.785682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.785716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.785925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.032 [2024-12-10 00:15:04.785958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.032 qpair failed and we were unable to recover it. 00:33:30.032 [2024-12-10 00:15:04.786173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.786221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.786505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.786540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 
00:33:30.033 [2024-12-10 00:15:04.786742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.786775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.787053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.787086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.787281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.787317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.787445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.787479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.787659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.787694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.787946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.787980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.788168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.788202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.788485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.788519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.788815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.788849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.789050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.789083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 
00:33:30.033 [2024-12-10 00:15:04.789267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.789302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.789500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.789535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.789663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.789703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.789819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.789853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.790001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.790034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.033 [2024-12-10 00:15:04.790172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.033 [2024-12-10 00:15:04.790219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.033 qpair failed and we were unable to recover it. 00:33:30.034 [2024-12-10 00:15:04.790422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.790457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-12-10 00:15:04.790662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.790697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-12-10 00:15:04.790899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.790933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-12-10 00:15:04.791172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.791216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 
00:33:30.034 [2024-12-10 00:15:04.791474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.791509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-12-10 00:15:04.791668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.791702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-12-10 00:15:04.792001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.792037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-12-10 00:15:04.792239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.792277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-12-10 00:15:04.792462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.792497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-12-10 00:15:04.792748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.792783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-12-10 00:15:04.792999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.793033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-12-10 00:15:04.793310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.793346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-12-10 00:15:04.793470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.793503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-12-10 00:15:04.793707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.793740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 
00:33:30.034 [2024-12-10 00:15:04.793920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.793954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.034 qpair failed and we were unable to recover it. 00:33:30.034 [2024-12-10 00:15:04.794168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.034 [2024-12-10 00:15:04.794204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-12-10 00:15:04.794386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.794420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-12-10 00:15:04.794622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.794655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-12-10 00:15:04.794864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.794898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-12-10 00:15:04.795180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.795226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-12-10 00:15:04.795360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.795394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-12-10 00:15:04.795598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.795632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-12-10 00:15:04.795761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.795797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-12-10 00:15:04.795912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.795947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 
00:33:30.035 [2024-12-10 00:15:04.796057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.796091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-12-10 00:15:04.796299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.796334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-12-10 00:15:04.796483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.796517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-12-10 00:15:04.796719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.796753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-12-10 00:15:04.796948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.796981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-12-10 00:15:04.797191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.797227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.035 qpair failed and we were unable to recover it. 00:33:30.035 [2024-12-10 00:15:04.797430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.035 [2024-12-10 00:15:04.797464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.797601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.797636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.797926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.797960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.798241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.798275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 
00:33:30.036 [2024-12-10 00:15:04.798408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.798442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.798716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.798750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.798878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.798918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.799111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.799145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.799438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.799475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.799775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.799811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.800113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.800147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.800379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.800414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.800613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.800648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.800850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.800886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 
00:33:30.036 [2024-12-10 00:15:04.800998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.801032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.801234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.801270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.801557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.036 [2024-12-10 00:15:04.801592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.036 qpair failed and we were unable to recover it. 00:33:30.036 [2024-12-10 00:15:04.801777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.801812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.802077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.802112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.802232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.802264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.802479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.802513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.802643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.802679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.802876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.802911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.803051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.803085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 
00:33:30.037 [2024-12-10 00:15:04.803295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.803333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.803544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.803580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.803715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.803749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.803930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.803966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.804220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.804255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.804436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.804472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.804668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.804703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.804817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.804849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.805070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.805106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.805312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.805346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 
00:33:30.037 [2024-12-10 00:15:04.805456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.805492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.805705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.805740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.805944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.037 [2024-12-10 00:15:04.805980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.037 qpair failed and we were unable to recover it. 00:33:30.037 [2024-12-10 00:15:04.806181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.806217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.806345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.806378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.806576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.806611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.806881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.806916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.807030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.807066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.807187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.807232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.807420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.807454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 
00:33:30.038 [2024-12-10 00:15:04.807680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.807715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.807909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.807943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.808212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.808253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.808436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.808470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.808722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.808757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.808881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.808916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.809191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.809226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.809406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.809440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.809619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.809654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.809934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.809968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 
00:33:30.038 [2024-12-10 00:15:04.810181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.810217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.810478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.810512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.810695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.810729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.038 [2024-12-10 00:15:04.810935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.038 [2024-12-10 00:15:04.810969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.038 qpair failed and we were unable to recover it. 00:33:30.039 [2024-12-10 00:15:04.811186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.039 [2024-12-10 00:15:04.811229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.039 qpair failed and we were unable to recover it. 00:33:30.039 [2024-12-10 00:15:04.811484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.039 [2024-12-10 00:15:04.811518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.039 qpair failed and we were unable to recover it. 00:33:30.039 [2024-12-10 00:15:04.811781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.039 [2024-12-10 00:15:04.811816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.039 qpair failed and we were unable to recover it. 00:33:30.039 [2024-12-10 00:15:04.811999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.039 [2024-12-10 00:15:04.812033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.039 qpair failed and we were unable to recover it. 00:33:30.039 [2024-12-10 00:15:04.812175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.039 [2024-12-10 00:15:04.812211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.039 qpair failed and we were unable to recover it. 00:33:30.039 [2024-12-10 00:15:04.812398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.039 [2024-12-10 00:15:04.812434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.039 qpair failed and we were unable to recover it. 
00:33:30.039 [2024-12-10 00:15:04.812625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.039 [2024-12-10 00:15:04.812659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.039 qpair failed and we were unable to recover it. 00:33:30.039 [2024-12-10 00:15:04.812838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.039 [2024-12-10 00:15:04.812873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.039 qpair failed and we were unable to recover it. 00:33:30.039 [2024-12-10 00:15:04.813128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.039 [2024-12-10 00:15:04.813174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.039 qpair failed and we were unable to recover it. 00:33:30.039 [2024-12-10 00:15:04.813296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.039 [2024-12-10 00:15:04.813330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.039 qpair failed and we were unable to recover it. 00:33:30.039 [2024-12-10 00:15:04.813508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.039 [2024-12-10 00:15:04.813543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.039 qpair failed and we were unable to recover it. 00:33:30.039 [2024-12-10 00:15:04.813726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.039 [2024-12-10 00:15:04.813759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.039 qpair failed and we were unable to recover it. 00:33:30.039 [2024-12-10 00:15:04.814044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.039 [2024-12-10 00:15:04.814079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.814262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.814299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.814550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.814584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.814830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.814911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 
00:33:30.040 [2024-12-10 00:15:04.815141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.815198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.815393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.815430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.815711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.815746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.815943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.815978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.816093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.816126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.816336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.816378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.816674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.816708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.816903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.816937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.817043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.817079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.817260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.817296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 
00:33:30.040 [2024-12-10 00:15:04.817479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.817513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.817648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.817682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.817908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.817949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.818073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.818106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.040 [2024-12-10 00:15:04.818241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.040 [2024-12-10 00:15:04.818277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.040 qpair failed and we were unable to recover it. 00:33:30.041 [2024-12-10 00:15:04.818473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-12-10 00:15:04.818509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-12-10 00:15:04.818706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-12-10 00:15:04.818741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.041 [2024-12-10 00:15:04.818961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.041 [2024-12-10 00:15:04.818996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.041 qpair failed and we were unable to recover it. 00:33:30.044 [2024-12-10 00:15:04.819200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-12-10 00:15:04.819240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-12-10 00:15:04.819367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-12-10 00:15:04.819401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 
00:33:30.044 [2024-12-10 00:15:04.819554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.044 [2024-12-10 00:15:04.819590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.044 qpair failed and we were unable to recover it. 00:33:30.044 [2024-12-10 00:15:04.819774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.819808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.819942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.819977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.820108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.820143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.820284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.820320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.820538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.820572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.820695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.820730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.820909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.820943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.821176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.821213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.821334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.821365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 
00:33:30.045 [2024-12-10 00:15:04.821544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.821579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.821765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.821800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.821930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.821962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.822075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.822108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.822242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.822277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.822472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.822507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.822622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.822657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.822838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.822872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.822984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.823016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.823217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.823298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 
00:33:30.045 [2024-12-10 00:15:04.823443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.823483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.823709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.823745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.823946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.823981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.824108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.824143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.824285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.824321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.824503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.824538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.824672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.824707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.824907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.824943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.825195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.045 [2024-12-10 00:15:04.825231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.045 qpair failed and we were unable to recover it. 00:33:30.045 [2024-12-10 00:15:04.825445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.825480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 
00:33:30.046 [2024-12-10 00:15:04.825660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.825696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.825917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.825952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.826074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.826120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.826243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.826276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.826457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.826492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.826674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.826708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.826834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.826869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.826978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.827012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.827124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.827168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.827354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.827389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 
00:33:30.046 [2024-12-10 00:15:04.827578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.827612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.827807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.827844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.827974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.828007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.828133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.828179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.828306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.828339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.828541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.828577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.828709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.828743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.828947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.828981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.829235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.829272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.829410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.829446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 
00:33:30.046 [2024-12-10 00:15:04.829570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.829603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.829817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.829852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.829965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.829996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.830107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.830141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.830277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.830313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.830423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.830457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.830735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.830769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.831053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.831087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.831199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.831230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.831450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.831492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 
00:33:30.046 [2024-12-10 00:15:04.831674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.831708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.831891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.831924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.832046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.832080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.832205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.046 [2024-12-10 00:15:04.832239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.046 qpair failed and we were unable to recover it. 00:33:30.046 [2024-12-10 00:15:04.832356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.832390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.832631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.832665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.832921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.832955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.833073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.833107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.833242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.833277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.833457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.833491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 
00:33:30.047 [2024-12-10 00:15:04.833669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.833703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.833919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.833953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.834079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.834124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.834320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.834355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.834494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.834528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.834671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.834705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.834885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.834919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.835101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.835136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.835348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.835384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.835508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.835541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 
00:33:30.047 [2024-12-10 00:15:04.835739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.835773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.835952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.835986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.836114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.836148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.836454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.836488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.836667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.836701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.836879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.836914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.837122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.837156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.837457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.837492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.837671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.837704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.837956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.837991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 
00:33:30.047 [2024-12-10 00:15:04.838119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.838153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.838278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.838324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.838508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.838542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.838728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.838761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.838937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.838971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.839102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.839135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.839293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.839329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.839447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.839481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.839683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.839715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.839850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.839889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 
00:33:30.047 [2024-12-10 00:15:04.840085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.840119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.840339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.840374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.840558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.840591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.840701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.840734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.840844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.840878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.840996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.841028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.841305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.841340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.841530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.841563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.841686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.841719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.841838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.841871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 
00:33:30.047 [2024-12-10 00:15:04.841994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.842028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.842154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.842196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.842373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.842413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.842614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.842647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.047 qpair failed and we were unable to recover it. 00:33:30.047 [2024-12-10 00:15:04.842823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.047 [2024-12-10 00:15:04.842856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.843034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.843067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.843265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.843299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.843413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.843447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.843701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.843733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.844008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.844041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 
00:33:30.048 [2024-12-10 00:15:04.844172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.844207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.844335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.844368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.844502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.844535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.844656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.844690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.844814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.844847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.845025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.845059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.845255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.845290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.845468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.845502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.845604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.845637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.845759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.845793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 
00:33:30.048 [2024-12-10 00:15:04.845971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.846004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.846211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.846245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.846370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.846404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.846513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.846545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.846668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.846702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.846875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.846908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.847085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.847119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.847265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.847299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.847425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.847458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 00:33:30.048 [2024-12-10 00:15:04.847572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.048 [2024-12-10 00:15:04.847606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.048 qpair failed and we were unable to recover it. 
00:33:30.052 [2024-12-10 00:15:04.888967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.889001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.889278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.889314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.889436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.889471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.889646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.889680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.889856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.889890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.890066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.890100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.890395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.890431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.890671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.890748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.890971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.891010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.891222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.891259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 
00:33:30.052 [2024-12-10 00:15:04.891516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.891550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.891685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.891719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.891851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.891886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.892139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.892192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.892393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.892430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.892609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.892642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.892835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.892868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.893074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.893109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.893260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.893297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.052 [2024-12-10 00:15:04.893548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.893591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 
00:33:30.052 [2024-12-10 00:15:04.893769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.052 [2024-12-10 00:15:04.893803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.052 qpair failed and we were unable to recover it. 00:33:30.346 [2024-12-10 00:15:04.893993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.346 [2024-12-10 00:15:04.894027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.346 qpair failed and we were unable to recover it. 00:33:30.346 [2024-12-10 00:15:04.894208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.346 [2024-12-10 00:15:04.894244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.346 qpair failed and we were unable to recover it. 00:33:30.346 [2024-12-10 00:15:04.894384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.346 [2024-12-10 00:15:04.894417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.346 qpair failed and we were unable to recover it. 00:33:30.346 [2024-12-10 00:15:04.894619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.346 [2024-12-10 00:15:04.894654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.346 qpair failed and we were unable to recover it. 00:33:30.346 [2024-12-10 00:15:04.894932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.346 [2024-12-10 00:15:04.894966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.346 qpair failed and we were unable to recover it. 00:33:30.346 [2024-12-10 00:15:04.895171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.346 [2024-12-10 00:15:04.895207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.346 qpair failed and we were unable to recover it. 00:33:30.346 [2024-12-10 00:15:04.895460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.346 [2024-12-10 00:15:04.895495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.346 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.895676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.895709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.895889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.895922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 
00:33:30.347 [2024-12-10 00:15:04.896121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.896156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.896467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.896503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.896649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.896683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.896871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.896905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.897036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.897071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.897285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.897320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.897610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.897644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.897825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.897859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.898143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.898189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.898493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.898528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 
00:33:30.347 [2024-12-10 00:15:04.898801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.898836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.899033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.899067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.899267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.899302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.899576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.899611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.899741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.899776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.899951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.899985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.900181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.900229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.900424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.900459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.900583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.900617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.900906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.900940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 
00:33:30.347 [2024-12-10 00:15:04.901064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.901098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.347 qpair failed and we were unable to recover it. 00:33:30.347 [2024-12-10 00:15:04.901290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.347 [2024-12-10 00:15:04.901326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.901510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.901544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.901667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.901700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.901883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.901917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.902138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.902185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.902385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.902420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.902629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.902663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.902859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.902892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.903104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.903144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 
00:33:30.348 [2024-12-10 00:15:04.903272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.903308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.903582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.903616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.903835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.903869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.904056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.904090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.904309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.904346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.904623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.904658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.904783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.904817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.905003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.905038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.905277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.905311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.905446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.905480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 
00:33:30.348 [2024-12-10 00:15:04.905660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.905695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.905920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.905954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.906134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.906177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.906367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.906402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.906659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.906692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.906998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.907033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.907236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.907271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.348 [2024-12-10 00:15:04.907389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.348 [2024-12-10 00:15:04.907424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.348 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.907623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.907659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.907948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.907982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 
00:33:30.349 [2024-12-10 00:15:04.908193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.908233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.908422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.908456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.908575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.908608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.908885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.908920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.909100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.909134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.909325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.909359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.909561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.909596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.909867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.909902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.910113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.910146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.910461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.910496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 
00:33:30.349 [2024-12-10 00:15:04.910618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.910652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.910879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.910935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.911137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.911174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.911439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.911468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.911656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.911685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.911860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.911886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.912069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.912095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.912261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.912295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.912413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.912439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.912603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.912635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 
00:33:30.349 [2024-12-10 00:15:04.912751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.912773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.912927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.912952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.913107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.349 [2024-12-10 00:15:04.913131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.349 qpair failed and we were unable to recover it. 00:33:30.349 [2024-12-10 00:15:04.913302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.913328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.913498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.913521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.913771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.913795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.914002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.914037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.914177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.914214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.914408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.914443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.914645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.914680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 
00:33:30.350 [2024-12-10 00:15:04.914903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.914937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.915152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.915202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.915385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.915419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.915540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.915576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.915782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.915817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.916016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.916050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.916309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.916346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.916543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.916577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.916757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.916792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.916976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.917000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 
00:33:30.350 [2024-12-10 00:15:04.917094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.917115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.917216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.917238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.917473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.917508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.917633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.917668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.917878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.917913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.918170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.918194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.918459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.918483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.918666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.918689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.918967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.350 [2024-12-10 00:15:04.918990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.350 qpair failed and we were unable to recover it. 00:33:30.350 [2024-12-10 00:15:04.919089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.919111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 
00:33:30.351 [2024-12-10 00:15:04.919374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.919409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.919709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.919744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.920055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.920080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.920288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.920321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.920432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.920464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.920653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.920685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.920865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.920897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.921181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.921215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.921396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.921429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.921623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.921664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 
00:33:30.351 [2024-12-10 00:15:04.921925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.921970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.922168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.922201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.922413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.922445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.922626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.922658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.922768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.922798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.922909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.922940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.923133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.923174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.923445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.923478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.923676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.923708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.923977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.924010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 
00:33:30.351 [2024-12-10 00:15:04.924226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.924260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.924378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.924410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.924604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.924637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.924821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.924853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.925128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.925196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.351 [2024-12-10 00:15:04.925333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.351 [2024-12-10 00:15:04.925369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.351 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.925553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.925588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.925723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.925757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.925951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.925983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.926186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.926220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 
00:33:30.352 [2024-12-10 00:15:04.926420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.926451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.926638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.926672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.926875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.926910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.927190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.927226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.927410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.927445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.927570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.927604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.927945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.928027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.928356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.928395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.928531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.928566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.928853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.928888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 
00:33:30.352 [2024-12-10 00:15:04.929073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.929106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.929320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.929356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.929565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.929598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.929779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.929813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.929925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.929956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.930098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.930131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.930337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.930371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.930506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.930540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.930800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.930833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.931015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.931049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 
00:33:30.352 [2024-12-10 00:15:04.931241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.931277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.352 qpair failed and we were unable to recover it. 00:33:30.352 [2024-12-10 00:15:04.931389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.352 [2024-12-10 00:15:04.931420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.931621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.931655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.931789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.931823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.932017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.932050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.932230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.932264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.932407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.932441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.932624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.932657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.932863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.932897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.933003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.933036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 
00:33:30.353 [2024-12-10 00:15:04.933150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.933194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.933378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.933411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.933615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.933649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.933772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.933805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.934058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.934093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.934357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.934393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.934600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.934634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.934813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.934846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.935033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.935069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.935250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.935284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 
00:33:30.353 [2024-12-10 00:15:04.935399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.935433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.935685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.353 [2024-12-10 00:15:04.935719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.353 qpair failed and we were unable to recover it. 00:33:30.353 [2024-12-10 00:15:04.936007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.936040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.936261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.936296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.936412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.936446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.936625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.936659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.936886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.936921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.937209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.937244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.937450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.937484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.937606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.937640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 
00:33:30.354 [2024-12-10 00:15:04.937824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.937858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.938133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.938175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.938455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.938490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.938677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.938710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.938937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.938971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.939134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.939178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.939385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.939418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.939617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.939650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.939878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.939913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.940115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.940149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 
00:33:30.354 [2024-12-10 00:15:04.940382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.940422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.940618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.940651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.940785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.940819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.941070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.941104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.941300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.941334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.941514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.941547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.941726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.941760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.941966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.941999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.942287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.942323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 00:33:30.354 [2024-12-10 00:15:04.942630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.942666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.354 qpair failed and we were unable to recover it. 
00:33:30.354 [2024-12-10 00:15:04.942777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.354 [2024-12-10 00:15:04.942811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.943014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.943046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.943229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.943264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.943456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.943490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.943699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.943734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.943871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.943906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.944188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.944223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.944523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.944562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.944769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.944804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.944986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.945022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 
00:33:30.355 [2024-12-10 00:15:04.945216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.945252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.945434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.945468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.945646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.945680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.945884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.945918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.946104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.946138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.946353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.946388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.946596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.946629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.946904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.946945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.947201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.947237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.947428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.947463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 
00:33:30.355 [2024-12-10 00:15:04.947641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.947675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.947933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.947967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.948239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.948274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.948556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.948592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.948890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.948924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.949052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.949086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.949362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.949397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.949513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.949546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.949745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.949779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.950058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.950093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 
00:33:30.355 [2024-12-10 00:15:04.950324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.950359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.950549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.950584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.355 qpair failed and we were unable to recover it. 00:33:30.355 [2024-12-10 00:15:04.950740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.355 [2024-12-10 00:15:04.950774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.951032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.951065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.951248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.951284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.951415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.951451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.951651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.951685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.951913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.951947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.952148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.952192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.952403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.952436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 
00:33:30.356 [2024-12-10 00:15:04.952551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.952585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.952806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.952841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.953041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.953076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.953272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.953307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.953531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.953566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.953684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.953717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.953960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.953993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.954248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.954284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.954592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.954626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.954903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.954937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 
00:33:30.356 [2024-12-10 00:15:04.955221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.955255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.955437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.955472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.955596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.955631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.955897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.955931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.956147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.956201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.956462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.956497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.956685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.956720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.956837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.956871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.957136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.957186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 00:33:30.356 [2024-12-10 00:15:04.957316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.356 [2024-12-10 00:15:04.957351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.356 qpair failed and we were unable to recover it. 
00:33:30.356 [2024-12-10 00:15:04.957473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.957509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.957706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.957741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.957871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.957905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.958029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.958064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.958245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.958282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.958461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.958495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.958693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.958727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.958907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.958942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.959150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.959194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.959314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.959348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 
00:33:30.357 [2024-12-10 00:15:04.959535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.959570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.959826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.959860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.960064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.960099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.960231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.960275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.960472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.960506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.960623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.960656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.960887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.960923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.961045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.961079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.961260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.961294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.961482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.961516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 
00:33:30.357 [2024-12-10 00:15:04.961693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.961727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.961836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.961869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.962049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.962082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.962362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.962397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.962603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.962638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.962931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.962972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.963083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.963117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.963275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.963316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.963576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.963611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.963788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.963822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 
00:33:30.357 [2024-12-10 00:15:04.963999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.964034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.964235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.964270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.964476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.964512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.964705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.964742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.964870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.964904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.965177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.965213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.965410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.965446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.965573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.965607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.965740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.965774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.965957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.965992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 
00:33:30.357 [2024-12-10 00:15:04.966130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.966181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.966426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.966460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.966642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.357 [2024-12-10 00:15:04.966680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.357 qpair failed and we were unable to recover it. 00:33:30.357 [2024-12-10 00:15:04.966892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.966925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.967105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.967139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.967272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.967307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.967465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.967500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.967694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.967730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.967860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.967896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.968087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.968121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 
00:33:30.358 [2024-12-10 00:15:04.968273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.968310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.968435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.968468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.968594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.968635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.968768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.968801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.968991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.969024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.969151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.969198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.969479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.969520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.969803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.969837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.970042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.970076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.970259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.970294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 
00:33:30.358 [2024-12-10 00:15:04.970488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.970522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.970661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.970697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.970837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.970872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.971085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.971119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.971240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.971276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.971391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.971424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.971633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.971668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.971973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.972006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.972289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.972324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.972436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.972470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 
00:33:30.358 [2024-12-10 00:15:04.972583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.972616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.972802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.972835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.973090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.973124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.973370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.973406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.973611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.973644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.973862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.973895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.974211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.974247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.974523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.974557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.974834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.974868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.975093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.975139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 
00:33:30.358 [2024-12-10 00:15:04.975401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.975437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.975643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.975677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.975882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.975916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.976095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.976128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.976249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.976285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.976521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.976556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.976751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.976785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.976970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.977004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.977286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.977320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.977588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.977621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 
00:33:30.358 [2024-12-10 00:15:04.977747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.977781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.978033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.978068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.978299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.978333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.978592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.978627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.978942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.978976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.979235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.979269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.358 [2024-12-10 00:15:04.979528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.358 [2024-12-10 00:15:04.979561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.358 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.979791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.979824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.979967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.980000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.980183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.980218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 
00:33:30.359 [2024-12-10 00:15:04.980351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.980385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.980589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.980623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.980812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.980846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.981024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.981058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.981294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.981329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.981586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.981618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.981803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.981836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.982121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.982155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.982375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.982408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.982592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.982626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 
00:33:30.359 [2024-12-10 00:15:04.982927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.982960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.983225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.983260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.983559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.983593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.983783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.983817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.984044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.984077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.984270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.984304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.984507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.984540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.984817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.984850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.985034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.985067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.985224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.985258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 
00:33:30.359 [2024-12-10 00:15:04.985377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.985411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.985691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.985724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.985851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.985884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.986084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.986118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.986308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.986343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.986528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.986562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.986693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.986727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.986932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.986964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.987153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.987217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.987408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.987442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 
00:33:30.359 [2024-12-10 00:15:04.987704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.987737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.987954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.987988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.988125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.988169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.988349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.988382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.988589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.988622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.988910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.988943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.989167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.989203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.989485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.989518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.989788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.989822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.990001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.990034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 
00:33:30.359 [2024-12-10 00:15:04.990212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.990247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.990363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.990394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.990614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.990647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.990845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.990878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.991087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.991120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.991258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.991293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.991486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.991519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.991641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.991681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.991956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.991990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.992201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.992236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 
00:33:30.359 [2024-12-10 00:15:04.992414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.992447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.992569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.992603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.992755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.992788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.992994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.993027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.993215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.993251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.993454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.359 [2024-12-10 00:15:04.993488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.359 qpair failed and we were unable to recover it. 00:33:30.359 [2024-12-10 00:15:04.993668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.993700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.993831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.993864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.994088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.994121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.994258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.994293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 
00:33:30.360 [2024-12-10 00:15:04.994546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.994579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.994719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.994753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.994950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.994984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.995204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.995241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.995493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.995528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.995857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.995891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.996126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.996169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.996390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.996424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.996555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.996588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.996802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.996835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 
00:33:30.360 [2024-12-10 00:15:04.997036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.997070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.997204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.997239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.997492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.997527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.997666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.997699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.997897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.997936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.998059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.998093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.998304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.998338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.998528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.998561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.998754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.998788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.999010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.999043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 
00:33:30.360 [2024-12-10 00:15:04.999175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.999212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.999497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.999531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.999712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.999746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:04.999869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:04.999902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.000084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.000118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.000328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.000366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.000628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.000663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.000842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.000876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.001032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.001068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.001273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.001309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 
00:33:30.360 [2024-12-10 00:15:05.001491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.001525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.001722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.001755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.001886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.001920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.002048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.002082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.002228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.002263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.002402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.002435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.002629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.002662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.002850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.002886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.003017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.003051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.003185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.003221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 
00:33:30.360 [2024-12-10 00:15:05.003479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.003513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.003708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.003741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.003885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.003920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.004048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.004082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.004264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.004299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.004480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.004514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.004628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.004662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.004789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.004822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.004996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.005030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.005308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.005343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 
00:33:30.360 [2024-12-10 00:15:05.005525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.005559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.005693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.005726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.005842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.005874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.006054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.006087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.360 qpair failed and we were unable to recover it. 00:33:30.360 [2024-12-10 00:15:05.006203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.360 [2024-12-10 00:15:05.006238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.006442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.006476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.006588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.006621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.006749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.006783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.006898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.006932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.007121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.007154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 
00:33:30.361 [2024-12-10 00:15:05.007393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.007426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.007605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.007638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.007917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.007950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.008072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.008105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.008223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.008258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.008512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.008545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.008761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.008795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.008908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.008943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.009064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.009098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.009240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.009275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 
00:33:30.361 [2024-12-10 00:15:05.009514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.009547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.009672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.009705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.009834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.009868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.010003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.010037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.010156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.010207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.010339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.010372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.010554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.010587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.010714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.010749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.010951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.010985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.011105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.011139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 
00:33:30.361 [2024-12-10 00:15:05.011281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.011315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.011445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.011479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.011664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.011703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.011862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.011897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.012074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.012109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.012239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.012274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.012402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.012435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.012558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.012592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.012788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.012821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.013031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.013065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 
00:33:30.361 [2024-12-10 00:15:05.013246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.013281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.013467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.013501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.013621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.013655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.013774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.013807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.014026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.014060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.014320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.014355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.014483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.014515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.014715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.014748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.014881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.014916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 00:33:30.361 [2024-12-10 00:15:05.015101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.361 [2024-12-10 00:15:05.015134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.361 qpair failed and we were unable to recover it. 
00:33:30.361 [2024-12-10 00:15:05.015268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.015304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.015491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.015524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.015638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.015672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.015858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.015892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.016013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.016046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.016242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.016275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.016458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.016491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.016604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.016638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.016748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.016782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.016892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.016931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 
00:33:30.362 [2024-12-10 00:15:05.017046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.017080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.017212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.017247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.017374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.017413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.017661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.017692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.017870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.017904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.018031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.018064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.018267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.018302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.018504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.018537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.018649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.018682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.018875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.018908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 
00:33:30.362 [2024-12-10 00:15:05.019051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.019085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.019283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.019317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.019431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.019465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.019662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.019696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.019812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.019845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.019954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.019987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.020172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.020207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.020388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.020422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.020608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.020642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.020766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.020799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 
00:33:30.362 [2024-12-10 00:15:05.020908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.020942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.021122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.021155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.021399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.021434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.021553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.021586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.021763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.021797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.021907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.021940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.022150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.022204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.022349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.022383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.022496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.022530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.022734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.022768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 
00:33:30.362 [2024-12-10 00:15:05.022888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.022922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.023107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.023141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.023352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.023386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.023597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.023630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.023766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.023799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.023919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.023952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.024142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.024186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.024364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.024398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.024516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.024550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 00:33:30.362 [2024-12-10 00:15:05.024678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.362 [2024-12-10 00:15:05.024712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.362 qpair failed and we were unable to recover it. 
00:33:30.362 [2024-12-10 00:15:05.024846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-12-10 00:15:05.024879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-12-10 00:15:05.025057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-12-10 00:15:05.025092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-12-10 00:15:05.025267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-12-10 00:15:05.025301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-12-10 00:15:05.025420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-12-10 00:15:05.025454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-12-10 00:15:05.025582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-12-10 00:15:05.025615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-12-10 00:15:05.025836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-12-10 00:15:05.025870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-12-10 00:15:05.025987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-12-10 00:15:05.026021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-12-10 00:15:05.026232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-12-10 00:15:05.026267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-12-10 00:15:05.026541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-12-10 00:15:05.026575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-12-10 00:15:05.026707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-12-10 00:15:05.026740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 
00:33:30.363 [2024-12-10 00:15:05.026867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-12-10 00:15:05.026901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-12-10 00:15:05.027012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-12-10 00:15:05.027046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-12-10 00:15:05.027152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.363 [2024-12-10 00:15:05.027195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.363 qpair failed and we were unable to recover it. 00:33:30.363 [2024-12-10 00:15:05.027312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.027345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.027481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.027515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.027634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.027667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.027888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.027921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.028099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.028133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.028256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.028290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.028496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.028530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 
00:33:30.364 [2024-12-10 00:15:05.028733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.028766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.028947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.028980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.029088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.029121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.029330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.029365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.029546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.029580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.029707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.029742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.029861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.029894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.030075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.030109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.030329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.030364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.030570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.030604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 
00:33:30.364 [2024-12-10 00:15:05.030714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.030749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.030950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.030984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.364 qpair failed and we were unable to recover it. 00:33:30.364 [2024-12-10 00:15:05.031093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.364 [2024-12-10 00:15:05.031124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.031378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.031413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.031705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.031739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.031848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.031882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.032060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.032093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.032223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.032258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.032436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.032470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.032579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.032612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 
00:33:30.365 [2024-12-10 00:15:05.032806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.032839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.033136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.033179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.033302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.033335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.033451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.033484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.033611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.033644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.033845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.033878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.034075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.034109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.034302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.034335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.034589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.034623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.034813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.034847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 
00:33:30.365 [2024-12-10 00:15:05.034969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.035002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.035109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.035143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.035277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.035311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.035433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.035466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.035648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.035686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.035812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.035845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.036029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.036062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.036237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.036272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.036381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.036415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 00:33:30.365 [2024-12-10 00:15:05.036520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.365 [2024-12-10 00:15:05.036553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.365 qpair failed and we were unable to recover it. 
00:33:30.365 [2024-12-10 00:15:05.036728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.365 [2024-12-10 00:15:05.036761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:30.365 qpair failed and we were unable to recover it.
00:33:30.368 [... the same three-line sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously through 2024-12-10 00:15:05.077790 ...]
00:33:30.369 [2024-12-10 00:15:05.077908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.077939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.078067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.078103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.078233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.078267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.078375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.078406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.078597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.078630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.078810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.078841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.079041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.079072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.079245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.079278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.079399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.079431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.079556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.079586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 
00:33:30.369 [2024-12-10 00:15:05.079695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.079728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.079903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.079934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.080075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.080107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.080298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.080351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.080528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.080560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.080701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.080733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.080994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.081025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.081169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.081202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.081385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.081416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.081522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.081554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 
00:33:30.369 [2024-12-10 00:15:05.081664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.081694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.081914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.081947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.082051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.082081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.082188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.082220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.082356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.082388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.082507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.082539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.082643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.082673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.082775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.082807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.082924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.082962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.083187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.083220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 
00:33:30.369 [2024-12-10 00:15:05.083356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.083387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.083501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.083532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.083647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.083679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.083793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.083826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.083929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.083958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.084077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.084104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.084218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.084246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.084411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.084439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.084549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.084576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.084826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.084856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 
00:33:30.369 [2024-12-10 00:15:05.084958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.084986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.085105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.085133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.085248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.085278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.085449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.085479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.085661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.085689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.085810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.085839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.085943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.085971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.086073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.086103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.086221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.086249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 00:33:30.369 [2024-12-10 00:15:05.086367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.369 [2024-12-10 00:15:05.086396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.369 qpair failed and we were unable to recover it. 
00:33:30.370 [2024-12-10 00:15:05.086503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.086531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.086627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.086656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.086762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.086792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.086973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.087001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.087089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.087119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.087262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.087290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.087408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.087440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.087553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.087580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.087741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.087768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.087894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.087922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 
00:33:30.370 [2024-12-10 00:15:05.088018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.088045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.088168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.088198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.088308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.088336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.088459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.088489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.088685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.088713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.088812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.088840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.089012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.089041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.089172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.089203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.089308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.089337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.089532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.089561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 
00:33:30.370 [2024-12-10 00:15:05.089664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.089692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.089798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.089828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.090023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.090049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.090193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.090223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.090326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.090353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.090464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.090492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.090599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.090630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.090738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.090769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.090867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.090897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.091009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.091040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 
00:33:30.370 [2024-12-10 00:15:05.091211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.091240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.091368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.091397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.091492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.091520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.091633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.091664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.091762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.091790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.091956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.091984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.092096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.092125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.092234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.092266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.092366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.092394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.092499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.092526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 
00:33:30.370 [2024-12-10 00:15:05.092621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.092648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.092808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.092837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.092934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.092961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.093074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.093103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.093210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.093240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.093340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.093367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.093495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.093545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.093716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.093740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.093894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.093918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.094016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.094040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 
00:33:30.370 [2024-12-10 00:15:05.094197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.094223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.094330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.094355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.094448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.094471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.094571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.094595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.094687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.094717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.094837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.094860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.094956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.094980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.095072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.095095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.095203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.095230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.095341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.095367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 
00:33:30.370 [2024-12-10 00:15:05.095608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.095632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.370 [2024-12-10 00:15:05.095727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.370 [2024-12-10 00:15:05.095753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.370 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.095844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.095867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.095967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.095994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.096105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.096133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.096241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.096265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.096356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.096380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.096551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.096576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.096668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.096692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.096778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.096804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 
00:33:30.371 [2024-12-10 00:15:05.096901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.096926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.097030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.097056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.097214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.097240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.097324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.097352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.097440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.097463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.097547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.097570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.097672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.097699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.097803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.097826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.097912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.097936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.098047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.098071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 
00:33:30.371 [2024-12-10 00:15:05.098181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.098206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.098300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.098323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.098477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.098502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.098590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.098613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.098701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.098725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.098808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.098830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.098910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.098932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.099027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.099049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.099131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.099155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.099287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.099312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 
00:33:30.371 [2024-12-10 00:15:05.099403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.099426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.099525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.099548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.099632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.099655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.099761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.099787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.099942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.099966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.100125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.100149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.100262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.100286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.100382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.100408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.100500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.100523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.100621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.100644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 
00:33:30.371 [2024-12-10 00:15:05.100732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.100760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.100866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.100889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.101062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.101086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.101239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.101265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.101357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.101382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.101535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.101559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.101661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.101687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.101844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.101869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.101956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.101978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.102135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.102165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 
00:33:30.371 [2024-12-10 00:15:05.102265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.102287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.102444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.102469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.102561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.102583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.102688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.102711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.102806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.102830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.103025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.103050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.103155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.103186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.103365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.103389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.103490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.103514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.103673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.103699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 
00:33:30.371 [2024-12-10 00:15:05.103796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.103818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.371 [2024-12-10 00:15:05.103898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.371 [2024-12-10 00:15:05.103917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.371 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.103999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.104019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.104109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.104129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.104313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.104336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.104417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.104436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.104534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.104558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.104655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.104674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.104769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.104809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.104890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.104909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 
00:33:30.372 [2024-12-10 00:15:05.105078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.105099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.105184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.105204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.105286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.105307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.105386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.105406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.105510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.105529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.105610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.105629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.105771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.105795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.105901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.105920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.106008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.106032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.106123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.106143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 
00:33:30.372 [2024-12-10 00:15:05.106241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.106261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.106372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.106396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.372 qpair failed and we were unable to recover it. 00:33:30.372 [2024-12-10 00:15:05.106478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.372 [2024-12-10 00:15:05.106498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-12-10 00:15:05.106643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-12-10 00:15:05.106665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-12-10 00:15:05.106751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-12-10 00:15:05.106771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-12-10 00:15:05.106926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-12-10 00:15:05.106947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-12-10 00:15:05.107035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-12-10 00:15:05.107054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-12-10 00:15:05.107154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-12-10 00:15:05.107203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-12-10 00:15:05.107375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-12-10 00:15:05.107398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-12-10 00:15:05.107551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-12-10 00:15:05.107572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 
00:33:30.373 [2024-12-10 00:15:05.107658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-12-10 00:15:05.107679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-12-10 00:15:05.107778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-12-10 00:15:05.107796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.373 qpair failed and we were unable to recover it. 00:33:30.373 [2024-12-10 00:15:05.107896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.373 [2024-12-10 00:15:05.107917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.108020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.108045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.108137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.108168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.108272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.108291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.108382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.108403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.108483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.108503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.108588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.108608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.108759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.108779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 
00:33:30.374 [2024-12-10 00:15:05.108860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.108879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.108966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.108986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.109128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.109152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.109310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.109329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.109412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.109433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.109518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.109538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.109636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.109656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.109816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.109836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.109912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.109935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.110017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.110037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 
00:33:30.374 [2024-12-10 00:15:05.110113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.110133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.110335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.110357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.110464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.110485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.110646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.110668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.110767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.110788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.110886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.110905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.110985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.111007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.111083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.111102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.111199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.111219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.374 qpair failed and we were unable to recover it. 00:33:30.374 [2024-12-10 00:15:05.111294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.374 [2024-12-10 00:15:05.111314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 
00:33:30.375 [2024-12-10 00:15:05.111438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.111460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.111558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.111577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.111684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.111704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.111805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.111825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.111915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.111935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.112042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.112063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.112143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.112169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.112260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.112280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.112364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.112383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.112462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.112484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 
00:33:30.375 [2024-12-10 00:15:05.112573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.112593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.112670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.112689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.112773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.112794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.112891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.112910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.112995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.113014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.113237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.113263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.113358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.113378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.113458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.113477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.113576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.113596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.113678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.113698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 
00:33:30.375 [2024-12-10 00:15:05.113781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.113801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.113879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.113898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.113992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.114011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.114089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.114109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.114195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.114214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.114299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.114317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.114391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.114410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.114484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.114502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.114580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.375 [2024-12-10 00:15:05.114600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.375 qpair failed and we were unable to recover it. 00:33:30.375 [2024-12-10 00:15:05.114693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.114712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 
00:33:30.376 [2024-12-10 00:15:05.114796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.114814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.114890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.114908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.115046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.115064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.115222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.115242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.115331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.115350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.115429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.115447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.115587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.115606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.115685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.115703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.115854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.115875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.115955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.115973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 
00:33:30.376 [2024-12-10 00:15:05.116062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.116080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.116163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.116183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.116296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.116314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.116400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.116419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.116499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.116517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.116665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.116686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.116791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.116811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.116963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.116982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.117172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.117192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.117291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.117319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 
00:33:30.376 [2024-12-10 00:15:05.117463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.117482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.117558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.117576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.117659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.117677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.117714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d7b20 (9): Bad file descriptor 00:33:30.376 [2024-12-10 00:15:05.118133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.118218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.118361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.118397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.118509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.118541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.376 [2024-12-10 00:15:05.118678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.376 [2024-12-10 00:15:05.118710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.376 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.118815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.118847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.118950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.118972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 
00:33:30.377 [2024-12-10 00:15:05.119130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.119150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.119244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.119262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.119373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.119391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.119473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.119491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.119567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.119585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.119663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.119682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.119755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.119773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.119847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.119865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.119945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.119964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.120054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.120072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 
00:33:30.377 [2024-12-10 00:15:05.120150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.120200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.120289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.120308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.120393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.120411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.120497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.120517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.120658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.120678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.120763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.120781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.120871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.120889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.120963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.120984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.121066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.121085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.121177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.121197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 
00:33:30.377 [2024-12-10 00:15:05.121272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.121290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.121433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.121451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.121539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.121558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.121744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.121765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.121924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.121944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.122018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.122036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.377 qpair failed and we were unable to recover it. 00:33:30.377 [2024-12-10 00:15:05.122121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.377 [2024-12-10 00:15:05.122141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.122306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.122327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.122409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.122427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.122578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.122599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 
00:33:30.378 [2024-12-10 00:15:05.122682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.122701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.122792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.122812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.122951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.122970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.123047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.123066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.123224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.123245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.123398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.123418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.123502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.123520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.123609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.123640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.123742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.123765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.123871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.123893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 
00:33:30.378 [2024-12-10 00:15:05.123974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.123994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.124147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.124173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.124251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.124271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.124346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.124367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.124459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.124480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.124631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.124654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.124745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.124764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.124938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.124960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.125044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.125063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.125168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.125188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 
00:33:30.378 [2024-12-10 00:15:05.125288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.125311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.125407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.125427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.125572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.125593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.125674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.125692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.125778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.125797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.378 [2024-12-10 00:15:05.125947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.378 [2024-12-10 00:15:05.125969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.378 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.126060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.126078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.126238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.126260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.126354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.126373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.126465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.126490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 
00:33:30.379 [2024-12-10 00:15:05.126583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.126602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.126689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.126710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.126786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.126806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.126897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.126919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.126993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.127017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.127093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.127113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.127216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.127238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.127399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.127420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.127505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.127525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.127617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.127637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 
00:33:30.379 [2024-12-10 00:15:05.127729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.127749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.127822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.127842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.127913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.127935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.128015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.128035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.128105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.128127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.128231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.128253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.379 [2024-12-10 00:15:05.128330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.379 [2024-12-10 00:15:05.128350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.379 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.128492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.128515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.128602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.128622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.128715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.128735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 
00:33:30.380 [2024-12-10 00:15:05.128816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.128835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.129054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.129074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.129167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.129188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.129331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.129352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.129431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.129450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.129598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.129619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.129713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.129732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.129891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.129912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.129994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.130014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.130099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.130118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 
00:33:30.380 [2024-12-10 00:15:05.130194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.130213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.130308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.130335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.130425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.130446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.130522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.130542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.130616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.130635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.130710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.130730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.130892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.130914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.131012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.131032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.131120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.131139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.131299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.131323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 
00:33:30.380 [2024-12-10 00:15:05.131411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.131430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.131519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.131539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.131617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.131637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.131726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.131745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.131820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.131840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.131934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.131954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.132051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.380 [2024-12-10 00:15:05.132070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.380 qpair failed and we were unable to recover it. 00:33:30.380 [2024-12-10 00:15:05.132168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.132188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.132337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.132357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.132436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.132456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 
00:33:30.381 [2024-12-10 00:15:05.132547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.132566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.132667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.132688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.132831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.132852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.132932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.132951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.133100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.133118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.133216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.133235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.133326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.133346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.133560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.133580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.133663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.133685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.133785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.133803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 
00:33:30.381 [2024-12-10 00:15:05.133876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.133894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.133985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.134004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.134092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.134110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.134187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.134207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.134284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.134302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.134383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.134402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.134495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.134513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.134611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.134633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.134709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.134728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.134799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.134818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 
00:33:30.381 [2024-12-10 00:15:05.134974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.134995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.135083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.135102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.135195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.135216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.135308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.135326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.135411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.135430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.135518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.135536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.135616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.135634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.135794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.135813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.135889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.135908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.135992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.136011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 
00:33:30.381 [2024-12-10 00:15:05.136086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.136104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.136352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.136374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.136539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.136559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.136700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.136722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.136817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.136836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.136981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.137001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.137086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.137106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.137185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.137205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.137295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.137315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.137395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.137414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 
00:33:30.381 [2024-12-10 00:15:05.137489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.137507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.137600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.137619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.137711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.137730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.137814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.137833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.137921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.137939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.138014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.138032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.138197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.138219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.138301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.138320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.138398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.138417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.138568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.138593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 
00:33:30.381 [2024-12-10 00:15:05.138684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.138703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.138798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.381 [2024-12-10 00:15:05.138817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.381 qpair failed and we were unable to recover it. 00:33:30.381 [2024-12-10 00:15:05.138899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.138918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.139024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.139045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.139135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.139154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.139250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.139268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.139358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.139376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.139452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.139472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.139547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.139566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.139654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.139672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 
00:33:30.382 [2024-12-10 00:15:05.139754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.139772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.139846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.139864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.140032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.140052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.140134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.140154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.140235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.140254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.140332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.140350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.140504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.140523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.140611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.140630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.140726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.140745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 00:33:30.382 [2024-12-10 00:15:05.140837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.382 [2024-12-10 00:15:05.140856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.382 qpair failed and we were unable to recover it. 
00:33:30.382 [2024-12-10 00:15:05.140936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.382 [2024-12-10 00:15:05.140954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:30.382 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retry from 00:15:05.141099 through 00:15:05.165273, with only the timestamps changing ...]
00:33:30.387 [2024-12-10 00:15:05.165367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.387 [2024-12-10 00:15:05.165384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:30.387 qpair failed and we were unable to recover it.
00:33:30.387 [2024-12-10 00:15:05.165558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.165575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.165715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.165733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.165816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.165832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.165921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.165938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.166008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.166025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.166130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.166146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.166247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.166264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.166335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.166351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.166432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.166449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.166519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.166535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 
00:33:30.387 [2024-12-10 00:15:05.166616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.166632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.166715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.166732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.166812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.166829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.166898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.166917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.167066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.167082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.167151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.167194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.167286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.167302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.167372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.167388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.167468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.167485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.167560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.167577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 
00:33:30.387 [2024-12-10 00:15:05.167660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.167676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.167754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.167770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.167846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.167863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.167941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.167957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.168035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.387 [2024-12-10 00:15:05.168051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.387 qpair failed and we were unable to recover it. 00:33:30.387 [2024-12-10 00:15:05.168123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.168140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.168225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.168247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.168330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.168346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.168434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.168450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.168553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.168569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 
00:33:30.388 [2024-12-10 00:15:05.168657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.168674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.168759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.168776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.168863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.168882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.168953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.168969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.169112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.169128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.169212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.169229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.169314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.169330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.169401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.169418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.169504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.169522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.169605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.169621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 
00:33:30.388 [2024-12-10 00:15:05.169699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.169716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.169800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.169818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.169897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.169914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.169998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.170014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.170171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.170189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.170271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.170287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.170358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.170373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.170455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.170473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.170549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.170566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 00:33:30.388 [2024-12-10 00:15:05.170647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.170663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it. 
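For readers unfamiliar with the numeric code: errno = 111 is ECONNREFUSED on Linux, i.e. the TCP connection attempt to 10.0.0.2:4420 was rejected because nothing was listening on that port at the time. The minimal sketch below reproduces exactly that connect() outcome with plain POSIX sockets; it is illustrative only and not SPDK code. The address and port are taken from the log, everything else is an assumption for the example.

/*
 * Illustrative sketch only (not SPDK code): reproduces the errno = 111
 * (ECONNREFUSED) reported by posix_sock_create above. connect() returns
 * ECONNREFUSED when the target host is reachable but no listener has the
 * port open; 10.0.0.2 and 4420 are taken from the log.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With a reachable host but no listener this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}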
00:33:30.388 [2024-12-10 00:15:05.170750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.388 [2024-12-10 00:15:05.170767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.388 qpair failed and we were unable to recover it.
00:33:30.388-00:33:30.390 [... the same connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it sequence repeats for every connection attempt through 2024-12-10 00:15:05.185784; the refused connections also hit tqpair=0x7f0290000b90 (from 00:15:05.171278) and tqpair=0x7f0294000b90 (from 00:15:05.177273), interleaved with tqpair=0x24c9be0, all with addr=10.0.0.2, port=4420 ...]
00:33:30.390 [2024-12-10 00:15:05.185948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.185974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-12-10 00:15:05.186083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.186109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-12-10 00:15:05.186216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.186243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-12-10 00:15:05.186334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.186360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-12-10 00:15:05.186464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.186490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-12-10 00:15:05.186592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.186617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-12-10 00:15:05.186776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.186801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-12-10 00:15:05.186909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.186934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-12-10 00:15:05.187044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.187070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-12-10 00:15:05.187226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.187253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 
00:33:30.390 [2024-12-10 00:15:05.187347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.187373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-12-10 00:15:05.187489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.187515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-12-10 00:15:05.187679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.187705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-12-10 00:15:05.187804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.187830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.390 [2024-12-10 00:15:05.187936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.390 [2024-12-10 00:15:05.187962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.390 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.188056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.188081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.188181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.188208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.188297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.188323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.188420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.188446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.188550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.188578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 
00:33:30.391 [2024-12-10 00:15:05.188681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.188705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.188801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.188825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.188909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.188932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.189018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.189045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.189135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.189168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.189256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.189279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.189383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.189408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.189498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.189521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.189612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.189636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.189723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.189747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 
00:33:30.391 [2024-12-10 00:15:05.189833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.189857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.190022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.190046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.190132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.190171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.190259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.190284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.190444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.190468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.190632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.190657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.190743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.190766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.190865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.190889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.190979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.191002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.191086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.191109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 
00:33:30.391 [2024-12-10 00:15:05.191204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.191228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.191327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.191351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.191442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.191467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.191571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.191596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.191686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.191710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.191805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.191830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.191933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.191958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.192047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.192070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.391 [2024-12-10 00:15:05.192168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.391 [2024-12-10 00:15:05.192194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.391 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.192357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.192382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 
00:33:30.392 [2024-12-10 00:15:05.192486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.192513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.192609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.192633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.192724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.192747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.192838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.192863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.193016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.193041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.193127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.193150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.193313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.193337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.193491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.193517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.193669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.193710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.193821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.193861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 
00:33:30.392 [2024-12-10 00:15:05.194068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.194101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.194291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.194327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.194447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.194480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.194584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.194616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.194739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.194771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.194894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.194920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.195030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.195055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.195149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.195189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.195312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.195343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.195513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.195545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 
00:33:30.392 [2024-12-10 00:15:05.195710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.195741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.195922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.195956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.196068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.196100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.196211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.196244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.196420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.196452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.196554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.196586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.196685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.196716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.196817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.196849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.196971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.197002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.197142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.197183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 
00:33:30.392 [2024-12-10 00:15:05.197291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.197322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.197496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.197527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.197646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.197677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.197785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.197816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.197931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.197961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.198077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.198108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.198224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.198256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.198375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.198407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.198518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.198549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.198662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.198693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 
00:33:30.392 [2024-12-10 00:15:05.198809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.198841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.198941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.198972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.199075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.199106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.199228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.199260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.199378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.199409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.199513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.199546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.199727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.199758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.199958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.199990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.200099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.200130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.200256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.200302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 
00:33:30.392 [2024-12-10 00:15:05.200408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.200439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.200541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.200572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.200741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.200773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.200898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.200929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.201034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.201065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.201177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.201211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.201383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.201414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.201579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.201611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.201710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.201742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.201860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.201891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 
00:33:30.392 [2024-12-10 00:15:05.202013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.202045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.202167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.202201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.202315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.202346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.202466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.202498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.202666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.202697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.202812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.202844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.202947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.202979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.392 qpair failed and we were unable to recover it. 00:33:30.392 [2024-12-10 00:15:05.203099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.392 [2024-12-10 00:15:05.203131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.203249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.203285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.203409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.203440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 
00:33:30.393 [2024-12-10 00:15:05.203566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.203598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.203700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.203731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.203833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.203863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.203961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.203991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.204091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.204122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.204234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.204266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.204382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.204414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.204528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.204559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.204660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.204690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.204866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.204898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 
00:33:30.393 [2024-12-10 00:15:05.205010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.205042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.205152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.205199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.205370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.205401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.205510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.205541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.205641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.205672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.205766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.205797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.205961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.205991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.206089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.206119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.206248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.206281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.206479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.206509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 
00:33:30.393 [2024-12-10 00:15:05.206625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.206656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.206773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.206804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.206933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.206963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.207069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.207100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.207212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.207245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.207346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.207377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.207542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.207573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.207685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.207717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.207913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.207944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.208043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.208074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 
00:33:30.393 [2024-12-10 00:15:05.208191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.208223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.208325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.208355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.208524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.208554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.208780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.208849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.208983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.209017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.209128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.209167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.209277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.209309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.209414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.209445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.209564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.209595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.209701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.209732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 
00:33:30.393 [2024-12-10 00:15:05.209834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.209866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.209965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.209997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.210097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.210128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.210258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.210297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.210420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.210453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.210713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.210745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.210849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.210890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.210996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.211026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.211130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.211174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.211309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.211341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 
00:33:30.393 [2024-12-10 00:15:05.211445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.211476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.211589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.211620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.211730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.211761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.211873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.211903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.212073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.212104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.212217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.212250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.212353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.212386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.212497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.212529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.212630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.212662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.212783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.212820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 
00:33:30.393 [2024-12-10 00:15:05.212941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.212973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.213089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.213119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.213235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.213268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.213387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.213419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.393 [2024-12-10 00:15:05.213531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.393 [2024-12-10 00:15:05.213564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.393 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.213667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.213699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.213812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.213845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.213958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.213991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.214094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.214126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.214257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.214292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-12-10 00:15:05.214402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.214436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.214607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.214638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.214747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.214780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.214974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.215009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.215195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.215228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.215405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.215435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.215542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.215574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.215685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.215716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.215830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.215861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.215979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.216010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-12-10 00:15:05.216123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.216154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.216270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.216301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.216468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.216499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.216605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.216636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.216819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.216851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.216974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.217006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.217194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.217234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.217402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.217433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.217531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.217563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.217752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.217784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-12-10 00:15:05.217885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.217916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.218085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.218117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.218234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.218266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.218380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.218412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.218542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.218573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.218675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.218706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.218817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.218848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.218957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.218989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.219096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.219127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.219238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.219270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-12-10 00:15:05.219529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.219564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.219749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.219780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.219893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.219924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.220033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.220064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.220271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.220303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.220413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.220444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.220545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.220577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.220696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.220727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.220828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.220860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.220969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.221001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-12-10 00:15:05.221116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.221146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.221264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.221295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.221409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.221440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.221547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.221583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.221750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.221781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.221894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.221926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.222111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.222142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.222257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.222289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.222468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.222499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.222605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.222634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-12-10 00:15:05.222806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.222835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.223016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.223045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.223169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.223200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.223313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.223342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.223464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.223494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.223610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.223639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.223749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.223779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.224003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.224034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.224137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.224177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.224295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.224326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 
00:33:30.394 [2024-12-10 00:15:05.224562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.224593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.224787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.224819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.224935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.224964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.225131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.225252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.225384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.225414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.225524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.225553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.225718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.225748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.394 [2024-12-10 00:15:05.225862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.394 [2024-12-10 00:15:05.225892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.394 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.226017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.226047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.226147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.226190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 
00:33:30.395 [2024-12-10 00:15:05.226369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.226399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.226518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.226548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.226651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.226680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.226797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.226828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.226994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.227024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.227222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.227254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.227439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.227470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.227590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.227622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.227726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.227756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.227926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.227957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 
00:33:30.395 [2024-12-10 00:15:05.228068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.228099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.228312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.228345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.228512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.228544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.228658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.228696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.228811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.228842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.229013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.229044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.229146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.229186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.229310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.229342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.229454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.229485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.229587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.229617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 
00:33:30.395 [2024-12-10 00:15:05.229733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.229764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.229877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.229908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.230025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.230056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.230175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.230207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.230318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.230351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.230466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.230498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.230666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.230699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.230807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.230838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.230940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.230971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.231138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.231182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 
00:33:30.395 [2024-12-10 00:15:05.231286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.231317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.231428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.231458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.231565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.231596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.231707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.231738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.231842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.231873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.231981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.232013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.232134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.232186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.232299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.232331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.232434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.232465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.232575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.232607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 
00:33:30.395 [2024-12-10 00:15:05.232806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.232837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.232949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.232979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.233095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.233126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.233253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.233295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.233401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.233432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.233529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.233559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.233678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.233710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.233876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.233906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.234007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.234039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.234329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.234364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 
00:33:30.395 [2024-12-10 00:15:05.234489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.234521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.234700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.234730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.234832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.234864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.234968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.235011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.235113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.235144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.235263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.235294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.235410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.235441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.235557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.235587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.235700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.235731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.235850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.235880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 
00:33:30.395 [2024-12-10 00:15:05.236050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.236080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.236244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.236276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.236382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.236413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.236527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.236558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.236752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.236782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.236955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.236986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.237088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.237118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.237318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.237353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.237456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.237487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.237608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.237639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 
00:33:30.395 [2024-12-10 00:15:05.237743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.237774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.395 [2024-12-10 00:15:05.237883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.395 [2024-12-10 00:15:05.237914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.395 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.238081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.238113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.238292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.238324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.238538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.238569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.238693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.238724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.238847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.238877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.239049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.239080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.239182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.239214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.239320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.239351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 
00:33:30.396 [2024-12-10 00:15:05.239531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.239568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.239736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.239767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.239938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.239969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.240066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.240097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.240276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.240308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.240422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.240453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.240573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.240604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.240782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.240814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.240917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.240947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.241059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.241090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 
00:33:30.396 [2024-12-10 00:15:05.241202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.241233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.241336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.241367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.241467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.241499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.241602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.241633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.241738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.241768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.241889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.241921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.242019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.242050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.242253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.242285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.242392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.242423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.242537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.242568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 
00:33:30.396 [2024-12-10 00:15:05.242690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.242721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.242961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.242992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.243115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.243146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.243268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.243299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.243421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.243452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.243556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.243587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.243693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.243724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.243832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.243862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.243962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.243993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.244115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.244146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 
00:33:30.396 [2024-12-10 00:15:05.244331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.244362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.244473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.244504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.244679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.244710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.244812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.244843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.244949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.244979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.245146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.245214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.245332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.245364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.245471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.245502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.245677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.245709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.245824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.245855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 
00:33:30.396 [2024-12-10 00:15:05.245963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.245999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.246113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.246144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.246262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.246294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.246471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.246502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.246669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.246699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.246801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.246833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.246934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.246965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.247154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.247198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.247319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.247351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.247634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.247665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 
00:33:30.396 [2024-12-10 00:15:05.247838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.247869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.247987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.248017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.248125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.248156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.248285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.248317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.248489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.248520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.248689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.248720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.248837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.248868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.249050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.249081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.249207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.249240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.396 [2024-12-10 00:15:05.249350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.249382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 
00:33:30.396 [2024-12-10 00:15:05.249619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.396 [2024-12-10 00:15:05.249650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.396 qpair failed and we were unable to recover it. 00:33:30.397 [2024-12-10 00:15:05.249754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.397 [2024-12-10 00:15:05.249786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.397 qpair failed and we were unable to recover it. 00:33:30.397 [2024-12-10 00:15:05.249956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.397 [2024-12-10 00:15:05.249988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.397 qpair failed and we were unable to recover it. 00:33:30.397 [2024-12-10 00:15:05.250180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.397 [2024-12-10 00:15:05.250213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.397 qpair failed and we were unable to recover it. 00:33:30.397 [2024-12-10 00:15:05.250315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.397 [2024-12-10 00:15:05.250346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.397 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.250527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.250559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.250731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.250762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.250878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.250908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.251019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.251050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.251151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.251188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 
00:33:30.683 [2024-12-10 00:15:05.251373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.251404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.251600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.251632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.251753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.251782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.251892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.251923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.252101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.252133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.252265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.252298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.252489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.252520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.252626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.252657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.252761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.252793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.252909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.252940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 
00:33:30.683 [2024-12-10 00:15:05.253123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.253186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.253366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.253398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.253503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.253534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.253640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.253671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.253775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.253807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.253941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.253972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.254078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.254110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.254254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.254287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.254482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.254514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.254617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.254648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 
00:33:30.683 [2024-12-10 00:15:05.254820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.254851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.255019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.255050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.683 [2024-12-10 00:15:05.255169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.683 [2024-12-10 00:15:05.255202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.683 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.255314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.255345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.255450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.255481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.255577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.255609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.255816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.255847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.256015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.256047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.256208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.256242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.256356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.256388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 
00:33:30.684 [2024-12-10 00:15:05.256583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.256615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.256728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.256760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.256933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.256965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.257142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.257182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.257291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.257322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.257435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.257467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.257576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.257608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.257849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.257920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.258117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.258152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.258296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.258328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 
00:33:30.684 [2024-12-10 00:15:05.258496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.258529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.258719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.258750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.258861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.258892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.258993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.259023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.259126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.259169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.259346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.259377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.259480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.259511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.259625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.259656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.259757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.259788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.259905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.259936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 
00:33:30.684 [2024-12-10 00:15:05.260106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.260138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.260272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.260303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.260548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.260580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.260682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.260713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.260905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.260937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.261108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.261138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.261343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.261376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.261473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.261504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.261623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.261654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.684 [2024-12-10 00:15:05.261761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.261791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 
00:33:30.684 [2024-12-10 00:15:05.261900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.684 [2024-12-10 00:15:05.261931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.684 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.262097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.262129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.262331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.262402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.262596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.262633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.262757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.262793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.262968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.262999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.263181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.263214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.263322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.263353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.263530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.263562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.263684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.263716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 
00:33:30.685 [2024-12-10 00:15:05.263816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.263848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.263973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.264004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.264113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.264145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.264334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.264366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.264469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.264500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.264667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.264699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.264811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.264843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.264948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.264985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.265155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.265197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.265368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.265401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 
00:33:30.685 [2024-12-10 00:15:05.265646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.265678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.265782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.265813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.266004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.266036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.266179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.266212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.266396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.266428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.266540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.266571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.266737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.266769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.266935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.266966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.267133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.267175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.267368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.267400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 
00:33:30.685 [2024-12-10 00:15:05.267638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.267670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.267843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.267875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.268058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.268090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.268355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.268388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.268494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.268526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.268642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.268674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.268789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.268820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.268936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.268968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.269225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.269258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 00:33:30.685 [2024-12-10 00:15:05.269431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.685 [2024-12-10 00:15:05.269462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.685 qpair failed and we were unable to recover it. 
00:33:30.686 [2024-12-10 00:15:05.269589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.269621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.269724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.269756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.269924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.269955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.270122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.270154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.270294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.270330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.270443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.270473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.270641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.270672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.270840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.270871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.271005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.271037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.271277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.271310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 
00:33:30.686 [2024-12-10 00:15:05.271503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.271534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.271651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.271682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.271850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.271882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.271995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.272026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.272229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.272262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.272465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.272497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.272606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.272637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.272748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.272780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.272892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.272923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.273168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.273200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 
00:33:30.686 [2024-12-10 00:15:05.273321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.273353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.273543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.273575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.273766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.273797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.273903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.273933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.274114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.274146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.274269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.274300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.274468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.274499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.274681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.274713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.274825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.274855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.274969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.275000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 
00:33:30.686 [2024-12-10 00:15:05.275178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.275211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.275403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.275441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.275546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.275594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.275759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.275791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.275973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.276003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.276107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.276138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.276258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.276290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.686 [2024-12-10 00:15:05.276422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.686 [2024-12-10 00:15:05.276453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.686 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.276652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.276683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.276810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.276841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 
00:33:30.687 [2024-12-10 00:15:05.277006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.277036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.277213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.277245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.277412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.277444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.277579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.277610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.277872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.277903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.278014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.278046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.278170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.278202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.278302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.278333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.278547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.278578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.278687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.278717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 
00:33:30.687 [2024-12-10 00:15:05.278847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.278878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.279051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.279083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.279247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.279279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.279376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.279407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.279569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.279600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.279714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.279746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.279911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.279941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.280121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.280152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.280282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.280319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.280506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.280537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 
00:33:30.687 [2024-12-10 00:15:05.280725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.280756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.280867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.280900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.281086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.281117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.281294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.281326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.281434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.281465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.281648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.281679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.281846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.281878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.282069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.282100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.282276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.282308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.282551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.282582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 
00:33:30.687 [2024-12-10 00:15:05.282748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.687 [2024-12-10 00:15:05.282779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.687 qpair failed and we were unable to recover it. 00:33:30.687 [2024-12-10 00:15:05.282891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.282922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.283119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.283152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.283277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.283308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.283418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.283450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.283612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.283643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.283747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.283778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.283945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.283976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.284177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.284210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.284386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.284417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 
00:33:30.688 [2024-12-10 00:15:05.284530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.284562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.284727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.284759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.284880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.284911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.285074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.285105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.285283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.285317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.285410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.285441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.285707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.285738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.285857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.285889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.286070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.286101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.286231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.286264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 
00:33:30.688 [2024-12-10 00:15:05.286434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.286466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.286631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.286661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.286778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.286810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.286997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.287029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.287125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.287168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.287283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.287315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.287491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.287523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.287701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.287732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.287997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.288028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.288197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.288267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 
00:33:30.688 [2024-12-10 00:15:05.288535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.288570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.288750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.288782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.288900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.288931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.289048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.289080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.289248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.289281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.289413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.289444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.289563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.289594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.289710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.289741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.289860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.688 [2024-12-10 00:15:05.289892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.688 qpair failed and we were unable to recover it. 00:33:30.688 [2024-12-10 00:15:05.290059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.290090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 
00:33:30.689 [2024-12-10 00:15:05.290206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.290238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.290426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.290458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.290562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.290598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.290705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.290736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.290989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.291021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.291204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.291236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.291355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.291387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.291548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.291580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.291688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.291719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.291881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.291912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 
00:33:30.689 [2024-12-10 00:15:05.292082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.292113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.292236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.292268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.292447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.292478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.292593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.292625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.292807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.292843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.293036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.293067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.293238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.293271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.293533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.293564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.293746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.293777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 00:33:30.689 [2024-12-10 00:15:05.293894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.689 [2024-12-10 00:15:05.293925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.689 qpair failed and we were unable to recover it. 
00:33:30.689 [2024-12-10 00:15:05.294146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.689 [2024-12-10 00:15:05.294197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420
00:33:30.689 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 through 00:15:05.295588 ...]
00:33:30.689 [2024-12-10 00:15:05.295788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.689 [2024-12-10 00:15:05.295821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420
00:33:30.689 qpair failed and we were unable to recover it.
[... the same triplet then repeats for tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 through 00:15:05.333827; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:30.695 [2024-12-10 00:15:05.333942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.333973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.334138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.334180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.334296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.334326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.334493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.334524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.334704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.334735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.334852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.334883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.334985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.335016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.335117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.335148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.335284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.335315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.335428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.335459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 
00:33:30.695 [2024-12-10 00:15:05.335624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.335655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.335842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.335873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.336055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.336086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.336196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.336228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.336404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.336435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.336535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.336566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.336729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.336760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.336873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.336904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.337071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.337102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.337320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.337353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 
00:33:30.695 [2024-12-10 00:15:05.337468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.337499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.337694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.337726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.337891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.337922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.338106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.338137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.338338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.338370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.338474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.338505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.338696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.338727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.338931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.338962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.339174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.339208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.339454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.339486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 
00:33:30.695 [2024-12-10 00:15:05.339600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.339633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.339831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.339863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.339987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.695 [2024-12-10 00:15:05.340018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.695 qpair failed and we were unable to recover it. 00:33:30.695 [2024-12-10 00:15:05.340133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.340176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.340291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.340322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.340435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.340466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.340654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.340686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.340805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.340836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.340954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.340985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.341098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.341135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 
00:33:30.696 [2024-12-10 00:15:05.341329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.341360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.341470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.341501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.341661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.341694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.341826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.341857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.342093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.342125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.342241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.342273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.342459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.342490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.342597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.342629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.342729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.342761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.342863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.342894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 
00:33:30.696 [2024-12-10 00:15:05.343101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.343132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.343382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.343415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.343582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.343613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.343900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.343931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.344048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.344080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.344252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.344284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.344417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.344448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.344642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.344673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.344775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.344806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.344987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.345019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 
00:33:30.696 [2024-12-10 00:15:05.345195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.345227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.345343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.345375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.345488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.345519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.345636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.345667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.345758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.345788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.345948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.345980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.696 [2024-12-10 00:15:05.346175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.696 [2024-12-10 00:15:05.346208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.696 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.346446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.346477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.346609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.346640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.346750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.346783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 
00:33:30.697 [2024-12-10 00:15:05.346997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.347029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.347199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.347232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.347431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.347462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.347570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.347601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.347789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.347822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.347956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.347987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.348098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.348130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.348312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.348344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.348457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.348489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.348599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.348636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 
00:33:30.697 [2024-12-10 00:15:05.348736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.348767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.348959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.348991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.349184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.349216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.349396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.349427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.349598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.349629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.349795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.349826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.349931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.349962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.350068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.350099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.350233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.350266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.350366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.350397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 
00:33:30.697 [2024-12-10 00:15:05.350495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.350527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.350795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.350826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.350942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.350973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.351085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.351117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.351251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.351284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.351455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.351490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.351662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.351694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.351795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.351827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.351934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.351965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.352154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.352196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 
00:33:30.697 [2024-12-10 00:15:05.352375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.352407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.352577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.352610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.352712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.352744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.352938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.352970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.353081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.353113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.697 [2024-12-10 00:15:05.353255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.697 [2024-12-10 00:15:05.353288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.697 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.353466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.353498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.353601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.353632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.353899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.353931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.354100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.354132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 
00:33:30.698 [2024-12-10 00:15:05.354248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.354279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.354396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.354427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.354611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.354643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.354829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.354860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.355081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.355113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.355247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.355280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.355394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.355425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.355645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.355676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.355874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.355905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.356007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.356044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 
00:33:30.698 [2024-12-10 00:15:05.356235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.356268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.356384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.356416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.356532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.356564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.356676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.356707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.356874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.356905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.357077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.357111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.357238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.357269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.357506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.357538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.357674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.357705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.357808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.357840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 
00:33:30.698 [2024-12-10 00:15:05.358005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.358036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.358138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.358180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.358287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.358319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.358508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.358540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.358722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.358754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.358868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.358899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.359013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.359044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.359249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.359281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.359383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.359413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.359607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.359639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 
00:33:30.698 [2024-12-10 00:15:05.359812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.359844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.359959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.359990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.360111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.360143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.360289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.360321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.698 qpair failed and we were unable to recover it. 00:33:30.698 [2024-12-10 00:15:05.360430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.698 [2024-12-10 00:15:05.360461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.360681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.360712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.360846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.360877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.360976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.361008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.361141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.361184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.361286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.361317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 
00:33:30.699 [2024-12-10 00:15:05.361419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.361451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.361640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.361672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.361776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.361807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.361915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.361946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.362051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.362082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.362199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.362232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.362448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.362479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.362591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.362623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.362883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.362914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.363084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.363121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 
00:33:30.699 [2024-12-10 00:15:05.363251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.363284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.363466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.363497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.363600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.363632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.363753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.363784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.363885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.363916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.364017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.364049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.364176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.364209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.364323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.364354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.364483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.364514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.364684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.364715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 
00:33:30.699 [2024-12-10 00:15:05.364846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.364877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.364977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.365008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.365110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.365141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.365338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.365370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.365473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.365504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.365683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.365712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.365816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.365845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.366018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.366047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.366177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.366208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.366324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.366354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 
00:33:30.699 [2024-12-10 00:15:05.366462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.366491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.366660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.366687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.366907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.699 [2024-12-10 00:15:05.366933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.699 qpair failed and we were unable to recover it. 00:33:30.699 [2024-12-10 00:15:05.367033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.367060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.367178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.367206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.367406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.367434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.367613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.367640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.367742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.367769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.367863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.367889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.368002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.368029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 
00:33:30.700 [2024-12-10 00:15:05.368136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.368175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.368273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.368299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.368417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.368443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.368543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.368569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.368663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.368691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.368800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.368826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.368917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.368944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.369066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.369094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.369265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.369293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.369390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.369423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 
00:33:30.700 [2024-12-10 00:15:05.369533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.369560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.369655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.369682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.369839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.369867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.369982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.370009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.370101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.370128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.370243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.370270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.370463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.370491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.370590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.370617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.370733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.370760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.370923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.370950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 
00:33:30.700 [2024-12-10 00:15:05.371059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.371086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.371185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.371213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.371321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.371348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.371449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.371476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.371579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.371605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.371705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.371732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.371833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.371860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.371969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.371997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.372165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.372193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.372356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.372382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 
00:33:30.700 [2024-12-10 00:15:05.372506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.372533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.372641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.700 [2024-12-10 00:15:05.372669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.700 qpair failed and we were unable to recover it. 00:33:30.700 [2024-12-10 00:15:05.372769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.372798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.372906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.372935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.373108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.373139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.373267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.373296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.373528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.373598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.373779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.373848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.373974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.374008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.374109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.374141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 
00:33:30.701 [2024-12-10 00:15:05.374328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.374360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.374467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.374497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.374621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.374652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.374834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.374865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.374969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.374999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.375114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.375145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.375258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.375289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.375404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.375433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.375604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.375636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.375872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.375903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 
00:33:30.701 [2024-12-10 00:15:05.376080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.376111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.376234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.376275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.376389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.376419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.376524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.376554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.376655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.376685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.376800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.376832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.376939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.376970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.377080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.377110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.377236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.377267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.377367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.377397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 
00:33:30.701 [2024-12-10 00:15:05.377605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.377636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.377745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.377775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.377877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.377907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.378015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.378053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.378242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.378275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.378392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.378423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.378534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.378564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.701 qpair failed and we were unable to recover it. 00:33:30.701 [2024-12-10 00:15:05.378731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.701 [2024-12-10 00:15:05.378762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.378886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.378917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.379037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.379067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 
00:33:30.702 [2024-12-10 00:15:05.379183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.379216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.379321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.379352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.379524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.379554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.379666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.379696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.379862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.379892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.380019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.380049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.380175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.380207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.380323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.380353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.380521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.380552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.380725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.380755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 
00:33:30.702 [2024-12-10 00:15:05.380872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.380901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.381025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.381055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.381177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.381207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.381312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.381342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.381449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.381480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.381665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.381695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.381794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.381823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.381959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.381988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.382097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.382126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.382305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.382376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 
00:33:30.702 [2024-12-10 00:15:05.382555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.382596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.382771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.382801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.382906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.382937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.383043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.383075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.383193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.383225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.383339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.383369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.383558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.383590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.383698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.383729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.383899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.383931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.384035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.384066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 
00:33:30.702 [2024-12-10 00:15:05.384178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.384210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.384315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.384346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.384449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.384480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.384711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.384741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.384846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.384877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.385049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.385080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.702 [2024-12-10 00:15:05.385201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.702 [2024-12-10 00:15:05.385234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.702 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.385419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.385450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.385580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.385611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.385777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.385808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 
00:33:30.703 [2024-12-10 00:15:05.385983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.386014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.386123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.386154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.386347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.386377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.386498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.386535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.386654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.386687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.386818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.386848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.386950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.386980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.387111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.387171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.387287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.387321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.387419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.387451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 
00:33:30.703 [2024-12-10 00:15:05.387562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.387593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.387757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.387788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.387895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.387927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.388044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.388075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.388189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.388223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.388399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.388431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.388536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.388567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.388670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.388702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.388869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.388901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.389005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.389037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 
00:33:30.703 [2024-12-10 00:15:05.389220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.389262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.389504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.389535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.389640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.389671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.389846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.389878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.390069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.390100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.390227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.390259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.390371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.390402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.390518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.390549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.390653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.390684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.390793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.390824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 
00:33:30.703 [2024-12-10 00:15:05.390935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.390965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.391086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.391118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.391372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.391405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.391524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.391555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.391681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.391712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.703 [2024-12-10 00:15:05.391833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.703 [2024-12-10 00:15:05.391865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.703 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.392028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.392061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.392179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.392213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.392330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.392365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.392463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.392493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 
00:33:30.704 [2024-12-10 00:15:05.392681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.392714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.392906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.392937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.393044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.393075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.393182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.393216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.393329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.393361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.393534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.393565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.393667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.393699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.393847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.393918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.394038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.394072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.394194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.394227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 
00:33:30.704 [2024-12-10 00:15:05.394352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.394383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.394489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.394520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.394657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.394688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.394860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.394892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.395075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.395106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.395284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.395317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.395435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.395467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.395584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.395616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.395729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.395761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.395871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.395903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 
00:33:30.704 [2024-12-10 00:15:05.396004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.396042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.396209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.396241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.396372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.396404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.396509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.396541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.396648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.396679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.396873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.396905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.397005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.397037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.397149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.397189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.397357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.397389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.397567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.397599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 
00:33:30.704 [2024-12-10 00:15:05.397773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.397804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.397969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.398001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.398130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.398172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.398288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.398319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.704 [2024-12-10 00:15:05.398446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.704 [2024-12-10 00:15:05.398478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.704 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.398592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.398624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.398727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.398758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.398928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.398959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.399062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.399094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.399222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.399254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 
00:33:30.705 [2024-12-10 00:15:05.399372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.399404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.399655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.399687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.399805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.399836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.399941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.399972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.400077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.400108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.400232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.400263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.400436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.400468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.400608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.400653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.400832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.400866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.400988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.401021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 
00:33:30.705 [2024-12-10 00:15:05.401197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.401230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.401354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.401386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.401604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.401636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.401740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.401771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.401888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.401918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.402095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.402127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.402324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.402361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.402467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.402498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.402706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.402737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.402927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.402959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 
00:33:30.705 [2024-12-10 00:15:05.403138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.403190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.403305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.403335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.403433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.403465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.403570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.403601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.403769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.403800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.403970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.404002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.404117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.404148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.404271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.404302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.404419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.705 [2024-12-10 00:15:05.404450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.705 qpair failed and we were unable to recover it. 00:33:30.705 [2024-12-10 00:15:05.404553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.404586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 
00:33:30.706 [2024-12-10 00:15:05.404698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.404728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.404897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.404929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.405042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.405074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.405242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.405273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.405453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.405485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.405607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.405638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.405750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.405782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.405884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.405915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.406039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.406070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.406183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.406215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 
00:33:30.706 [2024-12-10 00:15:05.406320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.406350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.406517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.406549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.406655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.406687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.406927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.406958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.407205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.407238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.407352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.407383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.407491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.407522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.407758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.407828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.407969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.408004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.408125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.408176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 
00:33:30.706 [2024-12-10 00:15:05.408286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.408317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.408508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.408540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.408808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.408840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.409033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.409065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.409188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.409222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.409323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.409353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.409544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.409576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.409741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.409773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.410010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.410041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.410209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.410240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 
00:33:30.706 [2024-12-10 00:15:05.410419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.410450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.410625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.410655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.410761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.410793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.410907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.410937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.411044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.411075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.411274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.411307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.411475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.411507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.411626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.411657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.411757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.411789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.706 [2024-12-10 00:15:05.411922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.411953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 
00:33:30.706 [2024-12-10 00:15:05.412140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.706 [2024-12-10 00:15:05.412181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.706 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.412295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.412327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.412428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.412460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.412628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.412659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.412907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.412944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.413141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.413187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.413323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.413355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.413458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.413490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.413597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.413629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.413730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.413761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 
00:33:30.707 [2024-12-10 00:15:05.413927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.413959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.414179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.414214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.414321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.414352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.414524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.414555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.414672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.414705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.414893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.414925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.415039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.415070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.415261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.415295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.415422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.415455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 00:33:30.707 [2024-12-10 00:15:05.415640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.707 [2024-12-10 00:15:05.415671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.707 qpair failed and we were unable to recover it. 
00:33:30.707 [2024-12-10 00:15:05.415855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.707 [2024-12-10 00:15:05.415886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:30.707 qpair failed and we were unable to recover it.
00:33:30.707 [2024-12-10 00:15:05.416053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.707 [2024-12-10 00:15:05.416084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:30.707 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 through 2024-12-10 00:15:05.448638 ...]
00:33:30.711 [2024-12-10 00:15:05.448870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.711 [2024-12-10 00:15:05.448940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420
00:33:30.711 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 through 2024-12-10 00:15:05.455536 ...]
00:33:30.712 [2024-12-10 00:15:05.455708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.712 [2024-12-10 00:15:05.455739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420
00:33:30.712 qpair failed and we were unable to recover it.
00:33:30.712 [2024-12-10 00:15:05.455928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.712 [2024-12-10 00:15:05.455959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.712 qpair failed and we were unable to recover it. 00:33:30.712 [2024-12-10 00:15:05.456139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.712 [2024-12-10 00:15:05.456178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.712 qpair failed and we were unable to recover it. 00:33:30.712 [2024-12-10 00:15:05.456296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.712 [2024-12-10 00:15:05.456327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.712 qpair failed and we were unable to recover it. 00:33:30.712 [2024-12-10 00:15:05.456436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.712 [2024-12-10 00:15:05.456466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.712 qpair failed and we were unable to recover it. 00:33:30.712 [2024-12-10 00:15:05.456717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.712 [2024-12-10 00:15:05.456787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.712 qpair failed and we were unable to recover it. 00:33:30.712 [2024-12-10 00:15:05.457004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.712 [2024-12-10 00:15:05.457039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.712 qpair failed and we were unable to recover it. 00:33:30.712 [2024-12-10 00:15:05.457234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.712 [2024-12-10 00:15:05.457274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.712 qpair failed and we were unable to recover it. 00:33:30.712 [2024-12-10 00:15:05.457450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.712 [2024-12-10 00:15:05.457482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.712 qpair failed and we were unable to recover it. 00:33:30.712 [2024-12-10 00:15:05.457600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.712 [2024-12-10 00:15:05.457632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.712 qpair failed and we were unable to recover it. 00:33:30.712 [2024-12-10 00:15:05.457762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.712 [2024-12-10 00:15:05.457795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.712 qpair failed and we were unable to recover it. 
00:33:30.712 [2024-12-10 00:15:05.457904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.712 [2024-12-10 00:15:05.457936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.712 qpair failed and we were unable to recover it. 00:33:30.712 [2024-12-10 00:15:05.458142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.712 [2024-12-10 00:15:05.458185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.712 qpair failed and we were unable to recover it. 00:33:30.712 [2024-12-10 00:15:05.458291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.712 [2024-12-10 00:15:05.458322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.712 qpair failed and we were unable to recover it. 00:33:30.712 [2024-12-10 00:15:05.458428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.712 [2024-12-10 00:15:05.458459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.712 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.458644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.458676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.458807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.458840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.459011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.459043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.459146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.459198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.459314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.459346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.459528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.459560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 
00:33:30.713 [2024-12-10 00:15:05.459669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.459701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.459868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.459899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.460086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.460120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.460293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.460326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.460544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.460576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.460813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.460846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.460962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.460994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.461193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.461233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.461354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.461385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.461491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.461523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 
00:33:30.713 [2024-12-10 00:15:05.461732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.461763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.461880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.461911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.462022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.462053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.462254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.462289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.462459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.462491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.462610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.462643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.462763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.462795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.462892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.462924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.463092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.463125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.463243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.463277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 
00:33:30.713 [2024-12-10 00:15:05.463387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.463421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.463616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.463649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.463902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.463934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.464100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.464133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.464268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.464301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.464473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.464507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.464676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.464708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.464881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.464914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.465091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.465124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.465332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.465369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 
00:33:30.713 [2024-12-10 00:15:05.465486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.465517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.465702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.465733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.465927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.465959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.466060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.466092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.466220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.466262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.466441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.466473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.466591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.466623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.466755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.466786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.713 [2024-12-10 00:15:05.466894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.713 [2024-12-10 00:15:05.466926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.713 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.467090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.467123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 
00:33:30.714 [2024-12-10 00:15:05.467303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.467336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.467445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.467474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.467641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.467673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.467783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.467815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.467917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.467949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.468073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.468104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.468370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.468403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.468510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.468541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.468667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.468703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.468875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.468906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 
00:33:30.714 [2024-12-10 00:15:05.469125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.469156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.469358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.469390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.469508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.469540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.469772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.469804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.469910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.469941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.470047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.470077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.470200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.470245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.470487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.470521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.470756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.470787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.470962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.470993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 
00:33:30.714 [2024-12-10 00:15:05.471105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.471136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.471326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.471359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.471458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.471490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.471732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.471764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.471867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.471905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.472021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.472053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.472179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.472213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.472338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.472372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.472553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.472585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.472763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.472795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 
00:33:30.714 [2024-12-10 00:15:05.472982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.473013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.473189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.473223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.473410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.473441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.473556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.473588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.473695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.473727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.473899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.473931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.474046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.474078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.474246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.474282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.474412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.474444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.474563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.474594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 
00:33:30.714 [2024-12-10 00:15:05.474764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.474796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.474975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.475007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.714 qpair failed and we were unable to recover it. 00:33:30.714 [2024-12-10 00:15:05.475204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.714 [2024-12-10 00:15:05.475237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.475340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.475372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.475543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.475576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.475747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.475779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.476022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.476054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.476220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.476254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.476393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.476426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.476551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.476585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 
00:33:30.715 [2024-12-10 00:15:05.476719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.476750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.476875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.476908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.477101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.477134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.477260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.477295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.477415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.477447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.477617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.477649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.477761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.477792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.477901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.477934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.478052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.478085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.478202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.478240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 
00:33:30.715 [2024-12-10 00:15:05.478413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.478445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.478634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.478666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.478776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.478806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.478976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.479010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.479139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.479216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.479344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.479376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.479495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.479525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.479625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.479657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.479755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.479786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.479984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.480017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 
00:33:30.715 [2024-12-10 00:15:05.480221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.480254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.480423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.480456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.480640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.480671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.480777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.480809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.480937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.480968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.481136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.481175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.481300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.481332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.481434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.481467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.481670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.481702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.481807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.481838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 
00:33:30.715 [2024-12-10 00:15:05.482004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.482036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.482141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.482192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.482373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.482405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.482515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.482547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.482675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.482706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.482813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.482845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.483010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.483041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.483197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.483231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.483398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.483430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 00:33:30.715 [2024-12-10 00:15:05.483626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.483657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.715 qpair failed and we were unable to recover it. 
00:33:30.715 [2024-12-10 00:15:05.483844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.715 [2024-12-10 00:15:05.483875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.483986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.484018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.484122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.484154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.484356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.484388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.484516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.484547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.484650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.484681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.484790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.484822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.484944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.484975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.485172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.485206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.485386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.485423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 
00:33:30.716 [2024-12-10 00:15:05.485528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.485559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.485761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.485793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.485959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.485992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.486111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.486143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.486341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.486381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.486495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.486526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.486705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.486736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.486839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.486871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.486973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.487004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.487119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.487151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 
00:33:30.716 [2024-12-10 00:15:05.487309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.487342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.487459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.487490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.487595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.487626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.487746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.487778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.487877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.487908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.488072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.488104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.488308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.488341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.488514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.488543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.488648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.488678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.488771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.488799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 
00:33:30.716 [2024-12-10 00:15:05.488974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.489003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.489112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.489141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.489248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.489278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.489444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.489473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.489576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.489605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.489718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.489746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.489849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.489878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.489988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.490016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.490123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.490151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.490270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.490302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 
00:33:30.716 [2024-12-10 00:15:05.490467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.490496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.490779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.490808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.490920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.490949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.491045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.491074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.491200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.491231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.491460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.491491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.491586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.491615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.491776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.491804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.491917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.716 [2024-12-10 00:15:05.491946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.716 qpair failed and we were unable to recover it. 00:33:30.716 [2024-12-10 00:15:05.492042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.492072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 
00:33:30.717 [2024-12-10 00:15:05.492182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.492212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.492388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.492417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.492522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.492550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.492652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.492680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.492842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.492876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.492973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.493002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.493093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.493122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.493338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.493366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.493540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.493569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.493740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.493767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 
00:33:30.717 [2024-12-10 00:15:05.493932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.493961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.494072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.494101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.494300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.494332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.494429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.494457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.494568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.494596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.494765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.494794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.494888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.494917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.495089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.495118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.495330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.495359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.495617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.495647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 
00:33:30.717 [2024-12-10 00:15:05.495814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.495843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.495939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.495967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.496170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.496200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.496324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.496353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.496521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.496550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.496645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.496675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.496855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.496884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.496979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.497008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.497114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.497143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.497254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.497283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 
00:33:30.717 [2024-12-10 00:15:05.497445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.497474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.497646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.497675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.497838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.497867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.498029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.498058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.498176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.498213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.498325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.498353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.498522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.498551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.498737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.498765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.498877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.498906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.499002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.499030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 
00:33:30.717 [2024-12-10 00:15:05.499202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.499231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.499327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.499355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.499458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.717 [2024-12-10 00:15:05.499486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.717 qpair failed and we were unable to recover it. 00:33:30.717 [2024-12-10 00:15:05.499592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.499621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.499803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.499837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.499997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.500026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.500189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.500219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.500310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.500339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.500521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.500550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.500781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.500809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 
00:33:30.718 [2024-12-10 00:15:05.500970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.500998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.501176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.501206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.501298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.501326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.501491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.501520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.501625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.501653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.501762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.501791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.501966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.501995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.502098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.502127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.502285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.502355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.502580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.502614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 
00:33:30.718 [2024-12-10 00:15:05.502812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.502844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.502950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.502981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.503234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.503268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.503460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.503493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.503669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.503700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.503814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.503845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.504011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.504043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.504168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.504202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.504303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.504332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.504447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.504476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 
00:33:30.718 [2024-12-10 00:15:05.504597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.504626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.504775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.504847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.505040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.505075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.505319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.505353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.505550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.505582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.505699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.505731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.505899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.505929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.506192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.506224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.506418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.506450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.506569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.506600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 
00:33:30.718 [2024-12-10 00:15:05.506716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.506747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.506916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.506948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.507061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.507092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.507354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.507386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.507554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.507596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.507850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.507881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.508054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.508085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.508202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.508235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.508365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.508397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.508582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.508613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 
00:33:30.718 [2024-12-10 00:15:05.508793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.508824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.509004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.509036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.509202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.509234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.718 qpair failed and we were unable to recover it. 00:33:30.718 [2024-12-10 00:15:05.509439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.718 [2024-12-10 00:15:05.509469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.509571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.509602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.509707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.509738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.509912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.509943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.510055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.510087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.510267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.510300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.510485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.510516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 
00:33:30.719 [2024-12-10 00:15:05.510637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.510669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.510833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.510864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.511030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.511061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.511236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.511270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.511440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.511471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.511581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.511612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.511805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.511837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.512005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.512034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.512147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.512192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.512386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.512417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 
00:33:30.719 [2024-12-10 00:15:05.512538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.512569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.512746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.512816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.512953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.512994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.513104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.513141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.513270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.513303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.513475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.513506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.513674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.513705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.513921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.513952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.514194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.514233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.514455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.514487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 
00:33:30.719 [2024-12-10 00:15:05.514603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.514634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.514831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.514863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.514988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.515019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.515135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.515179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.515306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.515337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.515448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.515480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.515583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.515614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.515783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.515816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/target_disconnect.sh: line 36: 530537 Killed "${NVMF_APP[@]}" "$@" 00:33:30.719 [2024-12-10 00:15:05.516011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.516045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 
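The errno = 111 that posix_sock_create keeps reporting above is ECONNREFUSED on Linux: the host keeps retrying its TCP connect to 10.0.0.2 port 4420 (the NVMe/TCP listener port) while nothing is accepting on it, because target_disconnect.sh line 36 has just killed the target application, as the Killed message above shows. The following is a minimal, hypothetical C sketch of that failure mode only; it is not taken from the SPDK sources named in the log, and it uses 127.0.0.1 so that it reproduces ECONNREFUSED on any machine with no local listener on that port.

/* Minimal sketch, not SPDK code: when a peer is reachable but nothing is
 * listening on the requested port, connect() fails with ECONNREFUSED,
 * which is errno 111 on Linux (the value posix_sock_create logs above).
 * 127.0.0.1 and port 4420 are used here purely for illustration. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };

    if (fd < 0)
        return 1;

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port seen in the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumes no local listener on 4420 */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}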
00:33:30.719 [2024-12-10 00:15:05.516248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.516282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.516454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.516487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.516589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.516621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.516725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:33:30.719 [2024-12-10 00:15:05.516758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.516870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.516903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.517005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.517037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:30.719 [2024-12-10 00:15:05.517153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.517197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.517298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.517337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 
00:33:30.719 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:30.719 [2024-12-10 00:15:05.517519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.517552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.517668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.517702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:30.719 [2024-12-10 00:15:05.517820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.517853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.719 [2024-12-10 00:15:05.518104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.518138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.518335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.518370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.719 [2024-12-10 00:15:05.518479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.719 [2024-12-10 00:15:05.518512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.719 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.518684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.518716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.518843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.518875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.519019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.519052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 
00:33:30.720 [2024-12-10 00:15:05.519239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.519271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.519380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.519412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.519578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.519609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.519789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.519821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.519946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.519977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.520179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.520211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.520334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.520364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.520488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.520518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.520634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.520665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.520833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.520865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 
00:33:30.720 [2024-12-10 00:15:05.520998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.521029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.521149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.521190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.521319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.521352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.521536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.521566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.521752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.521784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.521954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.521986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.522174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.522208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.522338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.522369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.522480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.522510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.522626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.522658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 
00:33:30.720 [2024-12-10 00:15:05.522862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.522892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.523001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.523032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.523137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.523191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.523306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.523337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.523526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.523556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.523660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.523691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.523811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.523842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.523955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.523986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.524096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.524125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.524312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.524351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 
00:33:30.720 [2024-12-10 00:15:05.524544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=531488 00:33:30.720 [2024-12-10 00:15:05.524576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.524758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.524790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 531488 00:33:30.720 [2024-12-10 00:15:05.524906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.524939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:30.720 [2024-12-10 00:15:05.525067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.525099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.525219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.525253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 531488 ']' 00:33:30.720 [2024-12-10 00:15:05.525493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.525528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.720 [2024-12-10 00:15:05.525641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.525673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it.
00:33:30.720 [2024-12-10 00:15:05.525838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.525868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.526034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.526066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.526180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.720 [2024-12-10 00:15:05.526214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.526385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.526416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:30.720 [2024-12-10 00:15:05.526593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.526625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:30.720 [2024-12-10 00:15:05.526744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.720 [2024-12-10 00:15:05.526777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.720 qpair failed and we were unable to recover it. 00:33:30.720 [2024-12-10 00:15:05.526955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.526986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.527106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.527137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 
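Interleaved with the connect errors, the xtrace above shows the harness restarting the target (nvmf/common.sh launches a new nvmf_tgt, pid 531488, inside the cvl_0_0_ns_spdk namespace) and then waiting for it via waitforlisten, with rpc_addr=/var/tmp/spdk.sock and max_retries=100, printing "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". waitforlisten itself is a shell helper in SPDK's test scripts, not the code below; purely as a rough illustration of what that wait amounts to, a hypothetical C loop that retries a connect() on the RPC UNIX domain socket until the process is listening might look like this.

/* Rough illustration only; SPDK's waitforlisten is a shell helper in the
 * test scripts, not this code. The idea it captures: keep retrying a
 * connect() on the RPC UNIX domain socket until the freshly started
 * target process is accepting, or give up after max_retries attempts. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { 0 };

    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd >= 0 && connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;        /* someone is listening on the socket */
        }
        if (fd >= 0)
            close(fd);
        sleep(1);            /* wait and retry, as the harness does */
    }
    return -1;               /* listener never showed up */
}

int main(void)
{
    /* /var/tmp/spdk.sock and 100 retries mirror the values in the xtrace above. */
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        printf("process is listening\n");
    else
        printf("timed out waiting for listener\n");
    return 0;
}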
00:33:30.721 [2024-12-10 00:15:05.527319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.527351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.527450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.527482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.527765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.527797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.527925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.527956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.528127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.528169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.528294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.528330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.528504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.528543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.528671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.528710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.528829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.528860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.528983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.529014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 
00:33:30.721 [2024-12-10 00:15:05.529131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.529173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.529361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.529392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.529519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.529551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.529658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.529689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.529869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.529900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.530000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.530032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.530322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.530357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.530556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.530591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.530718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.530752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.530862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.530894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 
00:33:30.721 [2024-12-10 00:15:05.531078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.531112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.531305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.531343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.531541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.531575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.531692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.531722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.531848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.531881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.532085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.532119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.532240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.532274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.532441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.532474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.532643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.532677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.532787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.532819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 
00:33:30.721 [2024-12-10 00:15:05.532948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.532981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.533154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.533200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.533405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.533439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.533553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.533587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.533699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.533731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.533868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.533900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.534083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.534116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.534298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.534332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.534506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.534539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.534709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.534741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 
00:33:30.721 [2024-12-10 00:15:05.534864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.534897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.535076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.535108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.535243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.535281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.535396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.721 [2024-12-10 00:15:05.535428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.721 qpair failed and we were unable to recover it. 00:33:30.721 [2024-12-10 00:15:05.535549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.535582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.535688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.535720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.535845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.535884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.536011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.536044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.536182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.536217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.536388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.536422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 
00:33:30.722 [2024-12-10 00:15:05.536541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.536574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.536679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.536713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.536825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.536858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.537077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.537110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.537291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.537325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.537441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.537475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.537656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.537688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.537804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.537837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.537951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.537985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.538097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.538130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 
00:33:30.722 [2024-12-10 00:15:05.538261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.538295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.538414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.538448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.538555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.538588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.538779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.538813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.538988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.539021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.539141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.539192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.539309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.539341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.539461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.539493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.539668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.539701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 00:33:30.722 [2024-12-10 00:15:05.539877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.722 [2024-12-10 00:15:05.539909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:30.722 qpair failed and we were unable to recover it. 
00:33:30.722 [2024-12-10 00:15:05.540151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.722 [2024-12-10 00:15:05.540197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420
00:33:30.722 qpair failed and we were unable to recover it.
00:33:30.724 [2024-12-10 00:15:05.559314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.724 [2024-12-10 00:15:05.559383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:30.724 qpair failed and we were unable to recover it.
00:33:30.724 [2024-12-10 00:15:05.559617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.724 [2024-12-10 00:15:05.559688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420
00:33:30.724 qpair failed and we were unable to recover it.
00:33:30.724 [2024-12-10 00:15:05.562387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.724 [2024-12-10 00:15:05.562458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420
00:33:30.724 qpair failed and we were unable to recover it.
00:33:30.725 [2024-12-10 00:15:05.569502] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:33:30.725 [2024-12-10 00:15:05.569543] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:30.725 [2024-12-10 00:15:05.572261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:30.725 [2024-12-10 00:15:05.572307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:30.725 qpair failed and we were unable to recover it.
00:33:30.726 [2024-12-10 00:15:05.578403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.578436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.578554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.578586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.578764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.578796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.578910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.578942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.579139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.579183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.579348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.579381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.579505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.579545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.579719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.579751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.579848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.579880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.580001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.580034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 
00:33:30.726 [2024-12-10 00:15:05.580144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.580189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.580306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.580340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.580448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.580481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.580649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.580682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.580797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.580831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.581022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.581055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.581227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.581261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.581399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.581434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.581540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.581573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.581675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.581707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 
00:33:30.726 [2024-12-10 00:15:05.581887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.581920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.582030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.582063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.582231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.582264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.582383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.582416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.582598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.582631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.582755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.582787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.582960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.582993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.583172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.583207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.583337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.583370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.583540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.583573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 
00:33:30.726 [2024-12-10 00:15:05.583747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.583779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.583883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.583916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.584085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.584118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.584252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.584293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.584510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.584544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.584784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.584817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.584933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.584965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.585075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.585108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.585290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.585324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.585514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.585547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 
00:33:30.726 [2024-12-10 00:15:05.585718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.585750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.585861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.585893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.585995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.586031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.586135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.586177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.586306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.586336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.586443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.586474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.586588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.586626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.586743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.586773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.586943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.586975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.587089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.587121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 
00:33:30.726 [2024-12-10 00:15:05.587304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.587338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.587510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.587543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.587728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.587761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.587874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.587907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.726 qpair failed and we were unable to recover it. 00:33:30.726 [2024-12-10 00:15:05.588145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.726 [2024-12-10 00:15:05.588189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.588301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.588332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.588500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.588533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.588724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.588756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.588929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.588962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.589077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.589111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 
00:33:30.727 [2024-12-10 00:15:05.589327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.589363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.589479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.589511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.589615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.589647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.589763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.589796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.589895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.589926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.590097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.590130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.590315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.590349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.590463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.590495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.590622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.590653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.590825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.590858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 
00:33:30.727 [2024-12-10 00:15:05.590977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.591010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.591113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.591145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.591338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.591372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.591498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.591540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.591641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.591674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.591785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.591817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.591995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.592028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.592242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.592277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.592481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.592513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:30.727 [2024-12-10 00:15:05.592685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.592718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 
00:33:30.727 [2024-12-10 00:15:05.592836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:30.727 [2024-12-10 00:15:05.592869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:30.727 qpair failed and we were unable to recover it. 00:33:31.021 [2024-12-10 00:15:05.592990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.021 [2024-12-10 00:15:05.593023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.021 qpair failed and we were unable to recover it. 00:33:31.021 [2024-12-10 00:15:05.593136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.021 [2024-12-10 00:15:05.593183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.021 qpair failed and we were unable to recover it. 00:33:31.021 [2024-12-10 00:15:05.593290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.593323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.593563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.593596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.593699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.593732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.593912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.593947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.594173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.594207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.594375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.594409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.594529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.594563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 
00:33:31.022 [2024-12-10 00:15:05.594733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.594766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.595011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.595045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.595228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.595263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.595385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.595418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.595523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.595557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.595671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.595704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.595827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.595861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.595964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.595997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.596112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.596145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.596268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.596302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 
00:33:31.022 [2024-12-10 00:15:05.596469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.596515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.596700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.596733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.596844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.596877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.597000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.597033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.597142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.597185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.597306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.597340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.597512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.597546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.597736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.597770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.597877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.597911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 00:33:31.022 [2024-12-10 00:15:05.598102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.022 [2024-12-10 00:15:05.598136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.022 qpair failed and we were unable to recover it. 
00:33:31.023 [2024-12-10 00:15:05.598318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.598352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.598469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.598504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.598617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.598650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.598823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.598856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.598969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.599001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.599204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.599239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.599363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.599396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.599513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.599546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.599662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.599696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.599798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.599832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 
00:33:31.023 [2024-12-10 00:15:05.599952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.599986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.600088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.600122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.600238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.600294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.600407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.600440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.600609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.600642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.600900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.600933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.601031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.601064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.601175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.601216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.601397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.601431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 00:33:31.023 [2024-12-10 00:15:05.601552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.023 [2024-12-10 00:15:05.601585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.023 qpair failed and we were unable to recover it. 
00:33:31.023 [2024-12-10 00:15:05.601693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.023 [2024-12-10 00:15:05.601726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.023 qpair failed and we were unable to recover it.
00:33:31.023 [2024-12-10 00:15:05.601843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.023 [2024-12-10 00:15:05.601876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.023 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." retry sequence for tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 repeats from 00:15:05.602042 through 00:15:05.613429 ...]
00:33:31.025 [2024-12-10 00:15:05.613633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.025 [2024-12-10 00:15:05.613679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420
00:33:31.025 qpair failed and we were unable to recover it.
[... the same retry sequence for tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 repeats from 00:15:05.613857 through 00:15:05.629949 ...]
00:33:31.028 [2024-12-10 00:15:05.630062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.028 [2024-12-10 00:15:05.630101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.028 qpair failed and we were unable to recover it.
[... the same retry sequence for tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 repeats from 00:15:05.630237 through 00:15:05.640736 ...]
00:33:31.029 [2024-12-10 00:15:05.640925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.029 [2024-12-10 00:15:05.640957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.029 qpair failed and we were unable to recover it.
00:33:31.029 [2024-12-10 00:15:05.641076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.029 [2024-12-10 00:15:05.641109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.029 qpair failed and we were unable to recover it. 00:33:31.029 [2024-12-10 00:15:05.641231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.029 [2024-12-10 00:15:05.641265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.029 qpair failed and we were unable to recover it. 00:33:31.029 [2024-12-10 00:15:05.641433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.029 [2024-12-10 00:15:05.641465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.029 qpair failed and we were unable to recover it. 00:33:31.029 [2024-12-10 00:15:05.641639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.029 [2024-12-10 00:15:05.641672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.029 qpair failed and we were unable to recover it. 00:33:31.029 [2024-12-10 00:15:05.641883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.029 [2024-12-10 00:15:05.641916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.029 qpair failed and we were unable to recover it. 00:33:31.029 [2024-12-10 00:15:05.642027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.029 [2024-12-10 00:15:05.642060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.029 qpair failed and we were unable to recover it. 00:33:31.029 [2024-12-10 00:15:05.642180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.029 [2024-12-10 00:15:05.642215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.029 qpair failed and we were unable to recover it. 00:33:31.029 [2024-12-10 00:15:05.642342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.029 [2024-12-10 00:15:05.642375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.029 qpair failed and we were unable to recover it. 00:33:31.029 [2024-12-10 00:15:05.642544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.029 [2024-12-10 00:15:05.642576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.029 qpair failed and we were unable to recover it. 00:33:31.029 [2024-12-10 00:15:05.642742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.642775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 
00:33:31.030 [2024-12-10 00:15:05.642943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.642976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.643174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.643208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.643385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.643418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.643589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.643621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.643788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.643823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.644025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.644059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.644171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.644210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.644410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.644443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.644546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.644579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.644762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.644795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 
00:33:31.030 [2024-12-10 00:15:05.644986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.645019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.645191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.645224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.645423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.645457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.645665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.645699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.645813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.645846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.646049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.646082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.646278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.646313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.646544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.646577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.646759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.646793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.646978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.647012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 
00:33:31.030 [2024-12-10 00:15:05.647123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.647169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.647351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.647383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.647567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.647601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.647794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.647827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.647928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.647962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.648153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.648197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.648368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.648403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.648568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.648602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.648769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.648803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.648972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.649007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 
00:33:31.030 [2024-12-10 00:15:05.649125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.649167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.649336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.649368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.649487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.649520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.649625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.649659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.649776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.649809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.649981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.650015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.650129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.650214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.030 qpair failed and we were unable to recover it. 00:33:31.030 [2024-12-10 00:15:05.650347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.030 [2024-12-10 00:15:05.650384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.031 qpair failed and we were unable to recover it. 00:33:31.031 [2024-12-10 00:15:05.650518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.031 [2024-12-10 00:15:05.650554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.031 qpair failed and we were unable to recover it. 00:33:31.031 [2024-12-10 00:15:05.650692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.031 [2024-12-10 00:15:05.650726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.031 qpair failed and we were unable to recover it. 
00:33:31.031 [2024-12-10 00:15:05.650848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.031 [2024-12-10 00:15:05.650883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.031 qpair failed and we were unable to recover it. 00:33:31.031 [2024-12-10 00:15:05.651004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.031 [2024-12-10 00:15:05.651037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.031 qpair failed and we were unable to recover it. 00:33:31.031 [2024-12-10 00:15:05.651204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.031 [2024-12-10 00:15:05.651239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.031 qpair failed and we were unable to recover it. 00:33:31.031 [2024-12-10 00:15:05.651350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.031 [2024-12-10 00:15:05.651384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.031 qpair failed and we were unable to recover it. 00:33:31.031 [2024-12-10 00:15:05.651507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.031 [2024-12-10 00:15:05.651540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.031 qpair failed and we were unable to recover it. 00:33:31.031 [2024-12-10 00:15:05.651737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.031 [2024-12-10 00:15:05.651771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.031 qpair failed and we were unable to recover it. 00:33:31.031 [2024-12-10 00:15:05.651883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.031 [2024-12-10 00:15:05.651917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.031 qpair failed and we were unable to recover it. 00:33:31.031 [2024-12-10 00:15:05.652028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.031 [2024-12-10 00:15:05.652062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.031 qpair failed and we were unable to recover it. 00:33:31.031 [2024-12-10 00:15:05.652231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.031 [2024-12-10 00:15:05.652266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.031 qpair failed and we were unable to recover it. 00:33:31.031 [2024-12-10 00:15:05.652450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.031 [2024-12-10 00:15:05.652484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.031 qpair failed and we were unable to recover it. 
00:33:31.031 [2024-12-10 00:15:05.652706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.031 [2024-12-10 00:15:05.652739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.031 qpair failed and we were unable to recover it.
00:33:31.031 [2024-12-10 00:15:05.652841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.031 [2024-12-10 00:15:05.652869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:31.031 [2024-12-10 00:15:05.652875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.031 qpair failed and we were unable to recover it.
[... the same connect() failed / qpair failed sequence for tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 continues for attempts timestamped 00:15:05.653008 through 00:15:05.661497 ...]
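For readers triaging this block: errno 111 on Linux is ECONNREFUSED, i.e. at the moment of each attempt nothing was accepting TCP connections on 10.0.0.2:4420 (the NVMe/TCP listener was presumably not up yet, or had already been torn down by the test). The sketch below is not part of the test and not SPDK code; it is a minimal, standalone reproduction of the same connect() failure that posix_sock_create reports above, with only the address and port taken from the log.

/* Minimal sketch (illustrative, not SPDK code): reproduce the connect()
 * failure reported by posix_sock_create above. With no listener on the
 * port, connect() fails with errno 111 (ECONNREFUSED). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),          /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With nothing listening on 10.0.0.2:4420 this prints
         * errno = 111 (Connection refused), matching the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}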
00:33:31.032 [2024-12-10 00:15:05.661663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.032 [2024-12-10 00:15:05.661696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.032 qpair failed and we were unable to recover it.
00:33:31.032 [2024-12-10 00:15:05.661937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.032 [2024-12-10 00:15:05.662011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420
00:33:31.032 qpair failed and we were unable to recover it.
00:33:31.032 [2024-12-10 00:15:05.662286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.032 [2024-12-10 00:15:05.662359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420
00:33:31.032 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 repeats for attempts timestamped 00:15:05.662509 through 00:15:05.665069 ...]
00:33:31.033 [2024-12-10 00:15:05.665209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.033 [2024-12-10 00:15:05.665243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.033 qpair failed and we were unable to recover it.
00:33:31.033 [2024-12-10 00:15:05.665346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.033 [2024-12-10 00:15:05.665376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.033 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 repeats for attempts timestamped 00:15:05.665490 through 00:15:05.670617 ...]
00:33:31.034 [2024-12-10 00:15:05.670734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.034 [2024-12-10 00:15:05.670776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420
00:33:31.034 qpair failed and we were unable to recover it.
[... the same sequence for tqpair=0x7f0290000b90 continues for attempts timestamped 00:15:05.670889 through 00:15:05.671389 ...]
00:33:31.034 [2024-12-10 00:15:05.671495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.671528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.671698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.671733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.671906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.671941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.672110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.672146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.672268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.672302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.672410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.672443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.672571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.672611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.672744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.672778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.672888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.672928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.673039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.673073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 
00:33:31.034 [2024-12-10 00:15:05.673189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.673222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.673395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.673428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.673541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.673574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.673749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.673782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.673956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.673990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.674230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.674264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.674369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.674402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.674524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.674557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.674665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.674700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.674802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.674836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 
00:33:31.034 [2024-12-10 00:15:05.674943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.674977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.675189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.675231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.675364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.675398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.675563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.675596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.675703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.675736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.675843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.675876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.034 [2024-12-10 00:15:05.676052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.034 [2024-12-10 00:15:05.676086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.034 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.676217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.676253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.676368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.676403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.676511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.676545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 
00:33:31.035 [2024-12-10 00:15:05.676654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.676689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.676807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.676841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.676949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.676983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.677184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.677218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.677425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.677460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.677650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.677696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.677879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.677912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.678026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.678060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.678175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.678211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.678385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.678419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 
00:33:31.035 [2024-12-10 00:15:05.678534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.678568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.678676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.678709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.678887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.678920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.679057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.679091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.679230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.679263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.679437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.679471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.679594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.679626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.679747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.679781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.679892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.679934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.680099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.680133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 
00:33:31.035 [2024-12-10 00:15:05.680325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.680360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.680474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.680508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.680625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.680660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.680768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.680803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.680918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.680951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.681074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.681107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.681248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.681281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.681384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.681417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.681587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.681619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.681743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.681777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 
00:33:31.035 [2024-12-10 00:15:05.681895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.681928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.682048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.682081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.682212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.682248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.682372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.682407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.035 qpair failed and we were unable to recover it. 00:33:31.035 [2024-12-10 00:15:05.682514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.035 [2024-12-10 00:15:05.682546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.682658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.682691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.682821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.682855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.683033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.683065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.683179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.683215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.683323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.683358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 
00:33:31.036 [2024-12-10 00:15:05.683460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.683493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.683608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.683642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.683811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.683845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.684047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.684080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.684196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.684229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.684368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.684426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.684605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.684639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.684820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.684853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.684977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.685011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.685123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.685169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 
00:33:31.036 [2024-12-10 00:15:05.685288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.685322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.685433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.685467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.685640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.685674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.685786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.685820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.685933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.685966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.686081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.686115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.686239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.686275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.686388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.686421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.686532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.686575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.686742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.686774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 
00:33:31.036 [2024-12-10 00:15:05.686968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.687004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.687250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.687284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.687400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.687434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.687534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.687566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.687674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.687706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.687877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.687910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.688019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.688054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.688171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.036 [2024-12-10 00:15:05.688205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.036 qpair failed and we were unable to recover it. 00:33:31.036 [2024-12-10 00:15:05.688345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.688377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.688482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.688515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 
00:33:31.037 [2024-12-10 00:15:05.688628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.688661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.688827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.688860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.688974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.689008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.689119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.689151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.689281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.689315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.689486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.689519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.689634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.689667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.689857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.689890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.689996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.690029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.690251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.690286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 
00:33:31.037 [2024-12-10 00:15:05.690395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.690428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.690538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.690571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.690754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.690788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.690961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.690994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.691116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.691150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.691291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.691329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.691446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.691479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.691577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.691610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.691711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.691744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.691859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.691891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 
00:33:31.037 [2024-12-10 00:15:05.692071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.692106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.692226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.692262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.692441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.692475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.692581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.692614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.692740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.692773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.692897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.692931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.693036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.693071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.693194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.693230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.693336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.693370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 00:33:31.037 [2024-12-10 00:15:05.693553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.037 [2024-12-10 00:15:05.693588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.037 qpair failed and we were unable to recover it. 
00:33:31.037 [2024-12-10 00:15:05.693694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.037 [2024-12-10 00:15:05.693727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.037 qpair failed and we were unable to recover it.
00:33:31.037 [2024-12-10 00:15:05.693860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.037 [2024-12-10 00:15:05.693892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.037 qpair failed and we were unable to recover it.
00:33:31.037 [2024-12-10 00:15:05.694016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.037 [2024-12-10 00:15:05.694049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.037 qpair failed and we were unable to recover it.
00:33:31.037 [2024-12-10 00:15:05.694167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.037 [2024-12-10 00:15:05.694202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.037 qpair failed and we were unable to recover it.
00:33:31.037 [2024-12-10 00:15:05.694392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.037 [2024-12-10 00:15:05.694424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.037 qpair failed and we were unable to recover it.
00:33:31.037 [2024-12-10 00:15:05.694881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:31.037 [2024-12-10 00:15:05.694906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:31.037 [2024-12-10 00:15:05.694914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:31.037 [2024-12-10 00:15:05.694922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:31.037 [2024-12-10 00:15:05.694928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:31.037 [2024-12-10 00:15:05.695640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.038 [2024-12-10 00:15:05.695697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.038 qpair failed and we were unable to recover it.
00:33:31.038 [2024-12-10 00:15:05.695927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.038 [2024-12-10 00:15:05.695962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.038 qpair failed and we were unable to recover it.
00:33:31.038 [2024-12-10 00:15:05.696150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.038 [2024-12-10 00:15:05.696195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.038 qpair failed and we were unable to recover it.
00:33:31.038 [2024-12-10 00:15:05.696322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.038 [2024-12-10 00:15:05.696354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.038 qpair failed and we were unable to recover it.
00:33:31.038 [2024-12-10 00:15:05.696476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.038 [2024-12-10 00:15:05.696510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.038 qpair failed and we were unable to recover it.
00:33:31.038 [2024-12-10 00:15:05.696622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.038 [2024-12-10 00:15:05.696661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 [2024-12-10 00:15:05.696582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:33:31.038 qpair failed and we were unable to recover it.
00:33:31.038 [2024-12-10 00:15:05.696766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.038 [2024-12-10 00:15:05.696797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 [2024-12-10 00:15:05.696734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:33:31.038 qpair failed and we were unable to recover it.
00:33:31.038 [2024-12-10 00:15:05.696841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:33:31.038 [2024-12-10 00:15:05.696905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.038 [2024-12-10 00:15:05.696936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 [2024-12-10 00:15:05.696842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:33:31.038 qpair failed and we were unable to recover it.
00:33:31.038 [2024-12-10 00:15:05.697044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.038 [2024-12-10 00:15:05.697075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.038 qpair failed and we were unable to recover it.
00:33:31.038 [2024-12-10 00:15:05.697182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.038 [2024-12-10 00:15:05.697216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.038 qpair failed and we were unable to recover it.
00:33:31.038 [2024-12-10 00:15:05.697388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.038 [2024-12-10 00:15:05.697422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.038 qpair failed and we were unable to recover it.
00:33:31.038 [2024-12-10 00:15:05.697593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.038 [2024-12-10 00:15:05.697626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420
00:33:31.038 qpair failed and we were unable to recover it.
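Note on the repeated errors above: errno 111 on Linux is ECONNREFUSED, i.e. each TCP connect() toward 10.0.0.2 port 4420 was rejected because nothing was accepting on that address and port at that moment; the interleaved reactor_run notices suggest an SPDK target application was still bringing up its reactors while these attempts were made. Each refused connect is reported by posix_sock_create, surfaced by nvme_tcp_qpair_connect_sock, and the qpair is then reported as failed and unrecoverable. A minimal stand-alone sketch of the same failure mode, using plain POSIX sockets rather than SPDK's internal sock layer (the address and port are copied from the log; everything else is illustrative):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Plain blocking TCP socket, the same transport the log's connect() uses. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on that port, Linux reports errno 111,
         * matching the log's "connect() failed, errno = 111". */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Once a listener is accepting on that address and port, the same connect() succeeds, so refusals like these typically mean the connection attempts ran before (or without) a listener being available.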
00:33:31.038 [2024-12-10 00:15:05.697754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.697786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.697893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.697926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.698028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.698061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.698171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.698205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.698394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.698428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.698532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.698566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.698683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.698716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.698816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.698850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.698955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.698988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.699179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.699215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 
00:33:31.038 [2024-12-10 00:15:05.699345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.699378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.699508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.699541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.699645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.699678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.699799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.699833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.699960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.699994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.700100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.700133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.700338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.700372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.700567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.700600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.700708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.700742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.700864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.700912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 
00:33:31.038 [2024-12-10 00:15:05.701092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.701127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.701269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.701307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.701429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.701462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.701654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.701689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.701869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.038 [2024-12-10 00:15:05.701904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.038 qpair failed and we were unable to recover it. 00:33:31.038 [2024-12-10 00:15:05.702017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.702051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.702258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.702293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.702416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.702449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.702573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.702607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.702726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.702760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 
00:33:31.039 [2024-12-10 00:15:05.702871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.702905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.703030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.703063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.703197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.703239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.703350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.703384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.703492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.703525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.703646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.703681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.703853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.703886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.704004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.704037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.704206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.704241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.704359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.704392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 
00:33:31.039 [2024-12-10 00:15:05.704565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.704598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.704766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.704799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.704918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.704952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.705056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.705087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.705264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.705298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.705407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.705442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.705560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.705593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.705760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.705796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.705925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.705957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.706072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.706105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 
00:33:31.039 [2024-12-10 00:15:05.706238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.706273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.706384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.706417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.706540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.706572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.706675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.706709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.706883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.706917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.707091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.707125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.039 qpair failed and we were unable to recover it. 00:33:31.039 [2024-12-10 00:15:05.707250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.039 [2024-12-10 00:15:05.707286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.707475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.707508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.707612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.707644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.707840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.707892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 
00:33:31.040 [2024-12-10 00:15:05.708080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.708120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.708408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.708446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.708565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.708598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.708703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.708736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.708849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.708882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.709054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.709087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.709201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.709236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.709357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.709390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.709503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.709536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.709648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.709682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 
00:33:31.040 [2024-12-10 00:15:05.709896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.709928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.710032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.710064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.710204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.710240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.710413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.710445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.710560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.710594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.710833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.710866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.710980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.711013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.711198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.711233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.711345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.711378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.711501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.711534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 
00:33:31.040 [2024-12-10 00:15:05.711640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.711673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.711775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.711807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.711910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.711942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.712079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.712112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.712299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.712333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.712509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.712543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.712744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.712797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.712969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.713002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.713181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.713217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 00:33:31.040 [2024-12-10 00:15:05.713395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.040 [2024-12-10 00:15:05.713428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.040 qpair failed and we were unable to recover it. 
00:33:31.041 [2024-12-10 00:15:05.713541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.713574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.713742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.713777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.713913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.713948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.714052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.714086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.714260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.714297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.714414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.714447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.714554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.714586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.714704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.714739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.714933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.714966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.715077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.715110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 
00:33:31.041 [2024-12-10 00:15:05.715307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.715342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.715539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.715572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.715678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.715712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.715892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.715925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.716031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.716064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.716201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.716235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.716345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.716379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.716488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.716521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.716631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.716665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.716788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.716820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 
00:33:31.041 [2024-12-10 00:15:05.716935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.716968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.717145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.717189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.717303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.717336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.717444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.717483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.717602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.717635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.717896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.717930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.718099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.718132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.718258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.718294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.718413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.718449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.718557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.718591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 
00:33:31.041 [2024-12-10 00:15:05.718712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.718745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.718914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.718948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.719049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.719081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.041 qpair failed and we were unable to recover it. 00:33:31.041 [2024-12-10 00:15:05.719194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.041 [2024-12-10 00:15:05.719229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.719342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.719377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.719492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.719526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.719636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.719669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.719874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.719908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.720032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.720066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.720410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.720447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 
00:33:31.042 [2024-12-10 00:15:05.720561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.720594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.720821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.720855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.720952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.720986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.721101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.721133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.721250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.721285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.721400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.721434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.721546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.721578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.721687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.721722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.721847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.721879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.722001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.722034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 
00:33:31.042 [2024-12-10 00:15:05.722147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.722206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.722327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.722360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.722540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.722574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.722688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.722721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.722888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.722921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.723055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.723089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.723205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.723240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.723366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.723399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.723513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.723545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.723657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.723692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 
00:33:31.042 [2024-12-10 00:15:05.723797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.723829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.724031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.724065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.724177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.724212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.724345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.724379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.724658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.724715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.724902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.724935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.725107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.725140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.725359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.725394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.725509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.725541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.725663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.725697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 
00:33:31.042 [2024-12-10 00:15:05.725802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.725835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.042 qpair failed and we were unable to recover it. 00:33:31.042 [2024-12-10 00:15:05.725941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.042 [2024-12-10 00:15:05.725975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.726174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.726209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.726326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.726360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.726489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.726523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.726625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.726658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.726831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.726865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.727032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.727075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.727256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.727291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.727404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.727439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 
00:33:31.043 [2024-12-10 00:15:05.727549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.727582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.727685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.727718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.727837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.727870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.727985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.728018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.728192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.728227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.728399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.728433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.728538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.728572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.728747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.728783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.728895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.728929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.729050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.729085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 
00:33:31.043 [2024-12-10 00:15:05.729327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.729362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.729481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.729515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.729626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.729660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.729776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.729812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.729923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.729957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.730074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.730107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.730232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.730269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.730391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.730427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.730560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.730595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.730708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.730743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 
00:33:31.043 [2024-12-10 00:15:05.730939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.730974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.731153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.731201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.731312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.731345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.731472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.731507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.731649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.731712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.731926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.731975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.732109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.732152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.732271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.732305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.732411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.732443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 00:33:31.043 [2024-12-10 00:15:05.732545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.043 [2024-12-10 00:15:05.732578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.043 qpair failed and we were unable to recover it. 
00:33:31.043 [2024-12-10 00:15:05.732731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.732764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.732961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.732994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.733094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.733126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.733251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.733288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.733410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.733442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.733552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.733585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.733697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.733730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.733838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.733870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.734044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.734077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.734189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.734224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 
00:33:31.044 [2024-12-10 00:15:05.734348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.734382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.734509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.734542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.734648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.734682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.734850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.734884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.734992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.735026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.735146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.735192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.735304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.735337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.735562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.735595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.735764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.735797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.735898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.735932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 
00:33:31.044 [2024-12-10 00:15:05.736032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.736066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.736178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.736215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.736337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.736369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.736488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.736521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.736642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.736674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.736787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.736820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.736919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.736951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.737060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.737093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.737215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.737249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.737420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.737453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 
00:33:31.044 [2024-12-10 00:15:05.737562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.737593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.737715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.737747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.737870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.737902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.738006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.738039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.738155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.738212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.738387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.738420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.738516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.738548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.738744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.738779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.738883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.044 [2024-12-10 00:15:05.738915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.044 qpair failed and we were unable to recover it. 00:33:31.044 [2024-12-10 00:15:05.739017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.739052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 
00:33:31.045 [2024-12-10 00:15:05.739175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.739210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.739390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.739423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.739600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.739633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.739735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.739769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.739976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.740009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.740143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.740189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.740301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.740334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.740506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.740539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.740664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.740697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.740812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.740843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 
00:33:31.045 [2024-12-10 00:15:05.740970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.741003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.741206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.741241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.741410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.741443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.741561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.741594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.741709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.741742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.741871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.741903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.742009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.742043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.742213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.742246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.742369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.742401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.742593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.742626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 
00:33:31.045 [2024-12-10 00:15:05.742731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.742763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.742883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.742925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.743039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.743071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.743195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.743230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.743337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.743370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.743479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.743512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.743630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.743663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.743759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.743793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.743901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.743935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.744045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.744078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 
00:33:31.045 [2024-12-10 00:15:05.744257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.744292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.744529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.744563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.744665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.744698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.045 [2024-12-10 00:15:05.744867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.045 [2024-12-10 00:15:05.744899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.045 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.745018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.745061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.745279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.745315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.745449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.745483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.745597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.745630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.745798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.745830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.745933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.745966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 
00:33:31.046 [2024-12-10 00:15:05.746075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.746108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.746236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.746270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.746451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.746483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.746661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.746694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.746797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.746830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.746938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.746972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.747149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.747197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.747330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.747364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.747489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.747521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.747640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.747672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 
00:33:31.046 [2024-12-10 00:15:05.747774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.747806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.747912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.747945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.748056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.748088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.748206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.748241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.748379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.748412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.748530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.748564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.748732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.748766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.748991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.749026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.749146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.749189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.749293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.749326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 
00:33:31.046 [2024-12-10 00:15:05.749437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.749471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.749604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.749650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.749831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.749867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.750039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.750074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.750192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.750233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.750372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.750407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.750599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.750633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.750760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.750795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.751070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.046 [2024-12-10 00:15:05.751103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.046 qpair failed and we were unable to recover it. 00:33:31.046 [2024-12-10 00:15:05.751238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.751273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 
00:33:31.047 [2024-12-10 00:15:05.751381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.751415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.751520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.751553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.751673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.751707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.751830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.751865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.752027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.752069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.752179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.752214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.752324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.752358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.752528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.752563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.752682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.752716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.752820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.752854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 
00:33:31.047 [2024-12-10 00:15:05.753113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.753148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.753285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.753319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.753489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.753523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.753628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.753662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.753775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.753807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.753918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.753953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.754066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.754100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.754238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.754277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.754401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.754434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.754544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.754578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 
00:33:31.047 [2024-12-10 00:15:05.754691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.754724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.754826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.754859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.755048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.755081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.755194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.755228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.755427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.755461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.755582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.755616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.755747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.755781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.047 [2024-12-10 00:15:05.755895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.047 [2024-12-10 00:15:05.755928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.047 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.756046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.756079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.756206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.756240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 
00:33:31.048 [2024-12-10 00:15:05.756434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.756467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.756591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.756642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.756862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.756905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.757026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.757058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.757227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.757260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.757362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.757395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.757510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.757541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.757656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.757690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.757800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.757833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.757936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.757968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 
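Note that the tqpair pointer in these messages changes (0x7f0290000b90, 0x24c9be0, 0x7f0294000b90 in the entries above), so several qpair objects are each retrying the same refused address rather than one qpair looping. A small sketch, assuming the console output has been saved locally under a hypothetical name console.log, that tallies the failed connect() attempts per qpair:

```python
import re
from collections import Counter

# Count "sock connection error of tqpair=<ptr>" lines per qpair pointer.
pat = re.compile(r"sock connection error of tqpair=(0x[0-9a-f]+)")

counts = Counter()
with open("console.log") as f:  # hypothetical local copy of this console log
    for line in f:
        counts.update(pat.findall(line))

for qpair, n in counts.most_common():
    print(f"{qpair}: {n} failed connect() attempts")
```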
00:33:31.048 [2024-12-10 00:15:05.758137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.758180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.758295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.758328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.758438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.758471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.758593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.758626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.758739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.758772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.758982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.759016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.759128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.759174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.759281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.759315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.759499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.759530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.759660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.759693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 
00:33:31.048 [2024-12-10 00:15:05.759804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.759838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.759944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.759977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.760145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.760190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.760290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.760323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.760427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.760458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.760572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.760605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.760725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.760758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.760883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.760917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.761049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.761082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.761193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.761226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 
00:33:31.048 [2024-12-10 00:15:05.761427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.761461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.761674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.761707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.761812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.761844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.762010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.762044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.762173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.048 [2024-12-10 00:15:05.762208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.048 qpair failed and we were unable to recover it. 00:33:31.048 [2024-12-10 00:15:05.762322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.762356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.762472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.762504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.762678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.762712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.762879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.762911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.763088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.763122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 
00:33:31.049 [2024-12-10 00:15:05.763323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.763357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.763550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.763588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.763705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.763736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.763845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.763878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.764048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.764081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.764203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.764237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.764343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.764377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.764491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.764522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.764626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.764659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.764830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.764862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 
00:33:31.049 [2024-12-10 00:15:05.764981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.765014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.765112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.765146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.765280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.765314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.765418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.765451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.765554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.765588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.765755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.765788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.765981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.766015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.766195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.766230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.766330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.766362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.766469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.766501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 
00:33:31.049 [2024-12-10 00:15:05.766621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.766653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.766819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.766851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.766959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.766992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.767194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.767228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.767352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.767385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.767491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.767523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.767639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.767672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.767805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.767836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.767957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.767991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.768171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.768205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 
00:33:31.049 [2024-12-10 00:15:05.768387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.768419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.768535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.768568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.768677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.049 [2024-12-10 00:15:05.768710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.049 qpair failed and we were unable to recover it. 00:33:31.049 [2024-12-10 00:15:05.768876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.768908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.769022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.769056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.769167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.769200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.769321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.769354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.769481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.769513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.769648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.769682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.769786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.769818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 
00:33:31.050 [2024-12-10 00:15:05.769991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.770027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.770208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.770249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.770375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.770407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.770531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.770564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.770760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.770793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.770888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.770920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.771033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.771066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.771174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.771208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.771326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.771358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.771534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.771567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 
00:33:31.050 [2024-12-10 00:15:05.771703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.771736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.771846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.771878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.771976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.772011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.772194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.772229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.772331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.772366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.772479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.772511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.772611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.772646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.772766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.772799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.772903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.772936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.773042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.773074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 
00:33:31.050 [2024-12-10 00:15:05.773176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.773209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.773331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.773376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.773486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.773518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.773630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.773663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.773775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.773808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.773921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.050 [2024-12-10 00:15:05.773954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.050 qpair failed and we were unable to recover it. 00:33:31.050 [2024-12-10 00:15:05.774129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.774173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.774285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.774318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.774424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.774457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.774562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.774595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 
00:33:31.051 [2024-12-10 00:15:05.774766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.774798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.774968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.775001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.775183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.775219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.775320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.775354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.775472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.775504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.775609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.775642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.775811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.775844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.775965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.775997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.776196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.776230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.776333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.776364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 
00:33:31.051 [2024-12-10 00:15:05.776474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.776506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.776625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.776664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.776767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.776799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.776927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.776961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.777077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.777110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.777229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.777263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.777437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.777470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.777582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.777615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.777722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.777754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.777861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.777895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 
00:33:31.051 [2024-12-10 00:15:05.778137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.778176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.778305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.778338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.778458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.778490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.778594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.778627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.778817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.778850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.778981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.779015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.779205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.779239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.779345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.779378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.779557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.779589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.779720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.779753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 
00:33:31.051 [2024-12-10 00:15:05.779877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.779909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.780018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.780052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.780171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.780206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.051 [2024-12-10 00:15:05.780311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.051 [2024-12-10 00:15:05.780344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.051 qpair failed and we were unable to recover it. 00:33:31.052 [2024-12-10 00:15:05.780514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.052 [2024-12-10 00:15:05.780546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.052 qpair failed and we were unable to recover it. 00:33:31.052 [2024-12-10 00:15:05.780651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.052 [2024-12-10 00:15:05.780685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.052 qpair failed and we were unable to recover it. 00:33:31.052 [2024-12-10 00:15:05.780800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.052 [2024-12-10 00:15:05.780833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.052 qpair failed and we were unable to recover it. 00:33:31.052 [2024-12-10 00:15:05.780938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.052 [2024-12-10 00:15:05.780970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.052 qpair failed and we were unable to recover it. 00:33:31.052 [2024-12-10 00:15:05.781078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.052 [2024-12-10 00:15:05.781111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.052 qpair failed and we were unable to recover it. 00:33:31.052 [2024-12-10 00:15:05.781246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.052 [2024-12-10 00:15:05.781277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.052 qpair failed and we were unable to recover it. 
00:33:31.052 [2024-12-10 00:15:05.781395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:31.052 [2024-12-10 00:15:05.781424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420
00:33:31.052 qpair failed and we were unable to recover it.
[... the same three-record sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 00:15:05.781 through 00:15:05.800 ...]
00:33:31.056 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:31.056 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:33:31.056 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:31.056 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:31.056 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same connect() failed / sock connection error / qpair failed records (00:15:05.800 - 00:15:05.801) remain interleaved with the harness trace lines above ...]
[... the identical connect() failed (errno = 111) / sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 / qpair failed sequence continues from 00:15:05.801 through 00:15:05.811 ...]
00:33:31.058 [2024-12-10 00:15:05.811433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.811459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.811619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.811651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.811740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.811780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.811870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.811893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.812087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.812113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.812205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.812230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.812328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.812354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.812439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.812465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.812565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.812598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.812694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.812718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 
00:33:31.058 [2024-12-10 00:15:05.812884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.812908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.813002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.813027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.813129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.813153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.813243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.813269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.813370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.813394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.813491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.813516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.813617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.813641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.813723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.813747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.813836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.813860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.813950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.813974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 
00:33:31.058 [2024-12-10 00:15:05.814052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.814078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.814165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.814190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.814284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.814309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.814403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.814427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.814523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.814548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.814635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.814661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.814745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.814769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.814854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.814879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.814975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.815001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.815104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.815128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 
00:33:31.058 [2024-12-10 00:15:05.815222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.815247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.058 [2024-12-10 00:15:05.815417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.058 [2024-12-10 00:15:05.815442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.058 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.815527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.815552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.815646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.815672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.815761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.815786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.815889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.815914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.816004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.816030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.816212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.816239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.816330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.816356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.816459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.816483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 
00:33:31.059 [2024-12-10 00:15:05.816570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.816594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.816685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.816714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.816821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.816845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.816995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.817018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.817108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.817132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.817244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.817268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.817353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.817377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.817474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.817499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.817584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.817608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.817697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.817721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 
00:33:31.059 [2024-12-10 00:15:05.817808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.817831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.817915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.817940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.818035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.818059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.818145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.818183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.818285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.818320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.818498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.818532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.818643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.818676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.818773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.818807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.818914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.818949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.819045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.819070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 
00:33:31.059 [2024-12-10 00:15:05.819168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.819194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.819285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.819310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.819399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.819423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.819584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.819608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.819694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.819718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.819808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.819832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.819914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.819939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.820022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.820047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.820130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.820155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 00:33:31.059 [2024-12-10 00:15:05.820247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.059 [2024-12-10 00:15:05.820272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.059 qpair failed and we were unable to recover it. 
00:33:31.059 [2024-12-10 00:15:05.820356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.820381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.820539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.820563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.820651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.820675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.820768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.820792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.820895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.820920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.821068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.821092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.821208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.821233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.821316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.821340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.821428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.821451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.821548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.821572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 
00:33:31.060 [2024-12-10 00:15:05.821679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.821703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.821806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.821834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.821934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.821959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.822050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.822074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.822196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.822222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.822306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.822329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.822423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.822445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.822541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.822563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.822639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.822661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.822757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.822778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 
00:33:31.060 [2024-12-10 00:15:05.822862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.822882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.822969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.822991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.823082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.823105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.823193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.823215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.823377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.823399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.823483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.823505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.823585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.823607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.823691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.823712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.823832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.823895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.824086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.824122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 
00:33:31.060 [2024-12-10 00:15:05.824263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.824301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.824419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.824442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.824545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.824567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.824654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.824675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.060 [2024-12-10 00:15:05.824832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.060 [2024-12-10 00:15:05.824854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.060 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.824948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.824970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.825048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.825069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.825148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.825177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.825259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.825281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.825359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.825381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 
00:33:31.061 [2024-12-10 00:15:05.825470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.825493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.825573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.825595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.825680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.825703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.825807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.825829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.825975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.825997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.826095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.826117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.826200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.826224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.826315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.826337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.826421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.826443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.826518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.826539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 
00:33:31.061 [2024-12-10 00:15:05.826617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.826640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.826722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.826748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.826898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.826919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.826996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.827017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.827096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.827117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.827218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.827241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.827321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.827343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.827430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.827451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.827527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.827549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.827644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.827665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 
00:33:31.061 [2024-12-10 00:15:05.827743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.827765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.827845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.827867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.827952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.827974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.828058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.828079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.828179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.828202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.828354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.828377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.828461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.828484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.828588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.828610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.828692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.828715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.828796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.828818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 
00:33:31.061 [2024-12-10 00:15:05.828898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.828919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.829020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.829043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.061 [2024-12-10 00:15:05.829136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.061 [2024-12-10 00:15:05.829171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.061 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.829253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.829274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.829364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.829388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.829465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.829486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.829571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.829593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.829681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.829702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.829852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.829924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.830047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.830087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 
00:33:31.062 [2024-12-10 00:15:05.830216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.830251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.830353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.830388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.830495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.830529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.830707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.830741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.830836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.830860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.831013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.831034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.831108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.831129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.831242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.831264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.831407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.831429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.831513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.831535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 
00:33:31.062 [2024-12-10 00:15:05.831609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.831631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.831706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.831733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.831815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.831836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.831935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.831956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.832042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.832063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.832218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.832242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.832473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.832496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.832582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.832604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.832709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.832734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.832897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.832921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 
00:33:31.062 [2024-12-10 00:15:05.833020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.833042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.833121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.833148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.833287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.833310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.833397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.833419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.833591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.833614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.833698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.833721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.833813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.833836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.833930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.833953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.834036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.834058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.062 [2024-12-10 00:15:05.834164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.834188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 
00:33:31.062 [2024-12-10 00:15:05.834345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.062 [2024-12-10 00:15:05.834368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.062 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.834460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.834484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.834730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.834754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.834835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.834857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.834936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.834965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.835054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.835077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.835155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.835186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.835368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.835390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.835504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.835545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.835671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.835704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 
00:33:31.063 [2024-12-10 00:15:05.835905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.835939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f029c000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.836107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.836131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.836289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.836315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:31.063 [2024-12-10 00:15:05.836405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.836430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.836520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.836544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.836639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.836661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:31.063 [2024-12-10 00:15:05.836758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.836782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.836890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.836913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.837001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.837023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 
00:33:31.063 [2024-12-10 00:15:05.837114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.837138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.837256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.837279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.837362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.837384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.837468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:31.063 [2024-12-10 00:15:05.837491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.837600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.837622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.837791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.837814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.837961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.837984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.838060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.838082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.838184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.838209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 
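The xtrace lines interleaved in the two entries above show the nvmf_target_disconnect_tc2 test installing its cleanup trap and then calling rpc_cmd bdev_malloc_create 64 512 -b Malloc0, i.e. creating a 64 MB malloc bdev with 512-byte blocks named Malloc0 over SPDK's JSON-RPC interface. Outside the harness the same call is usually issued with scripts/rpc.py; a minimal sketch, assuming a running SPDK target on the default /var/tmp/spdk.sock socket:
# Sketch only, not taken from this run: create the same backing bdev by hand.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# The bdev would then typically be attached to an NVMe-oF subsystem (for example
# with nvmf_subsystem_add_ns) before the disconnect behaviour is exercised.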
00:33:31.063 [2024-12-10 00:15:05.838296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.838319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.838397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.838421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.838513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.838536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.838613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.838635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.838733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.838757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.838858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.838882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.838972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.838994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.839077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.839099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.839175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.839197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.839295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.839318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 
00:33:31.063 [2024-12-10 00:15:05.839400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.839423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.839568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.839591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.063 [2024-12-10 00:15:05.839737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.063 [2024-12-10 00:15:05.839759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.063 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.839859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.839881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.839974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.839996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.840086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.840109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.840195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.840218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.840367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.840391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.840501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.840541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.840661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.840695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 
00:33:31.064 [2024-12-10 00:15:05.840816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.840849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.840952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.840985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.841093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.841126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.841275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.841330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c9be0 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.841449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.841474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.841555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.841577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.841725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.841747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.841829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.841851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.842012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.842035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.842121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.842144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 
00:33:31.064 [2024-12-10 00:15:05.842254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.842279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.842402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.842427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.842534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.842560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.842736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.842761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.842853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.842879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.842978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.843004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.843107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.843132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.843232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.843259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.843415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.843442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.843542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.843568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 
00:33:31.064 [2024-12-10 00:15:05.843794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.843820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.843928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.843953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.844057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.844082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.844174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.844201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.844312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.844337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.844446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.844472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.844561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.844586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.844691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.844716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.844806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.844831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 00:33:31.064 [2024-12-10 00:15:05.844924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.844949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.064 qpair failed and we were unable to recover it. 
00:33:31.064 [2024-12-10 00:15:05.845038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.064 [2024-12-10 00:15:05.845063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.845164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.845191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.845284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.845309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.845404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.845429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.845572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.845598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.845748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.845773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.845931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.845957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.846059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.846085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.846188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.846226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.846314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.846340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 
00:33:31.065 [2024-12-10 00:15:05.846428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.846454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.846550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.846577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.846677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.846703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.846789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.846814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.846927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.846953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.847039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.847064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.847152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.847184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.847276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.847302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.847391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.847416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.847518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.847543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 
00:33:31.065 [2024-12-10 00:15:05.847641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.847667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.847836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.847862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.847947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.847974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.848065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.848089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.848183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.848209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.848303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.848329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.848416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.848441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.848612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.848637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.848745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.848770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.848856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.848882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 
00:33:31.065 [2024-12-10 00:15:05.848976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.849001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.849180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.849206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.849310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.849336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.849494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.849519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.849622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.849648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.849750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.849775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.849861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.849887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.849981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.850006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.850093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.850119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.065 [2024-12-10 00:15:05.850242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.850268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 
00:33:31.065 [2024-12-10 00:15:05.850365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.065 [2024-12-10 00:15:05.850391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.065 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.850613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.850638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.850794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.850820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.850908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.850934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.851028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.851054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.851156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.851202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.851300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.851325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.851418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.851443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.851599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.851629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.851727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.851752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 
00:33:31.066 [2024-12-10 00:15:05.851847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.851873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.851966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.851991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.852102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.852128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.852292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.852318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.852411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.852436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.852525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.852551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.852638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.852664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.852757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.852782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.852965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.852991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.853086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.853112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 
00:33:31.066 [2024-12-10 00:15:05.853372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.853398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.853483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.853509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.853601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.853626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.853712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.853737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.853826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.853851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.853937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.853962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.854046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.854071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.854177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.854203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.854371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.854396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.854488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.854514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 
00:33:31.066 [2024-12-10 00:15:05.854603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.854629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.854715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.854740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.854892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.854917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.855092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.855117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.855232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.855258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.855369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.855395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.855492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.855517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.855604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.855629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.855717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.855742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.855838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.855864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 
00:33:31.066 [2024-12-10 00:15:05.855960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.855985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.856075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.066 [2024-12-10 00:15:05.856101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.066 qpair failed and we were unable to recover it. 00:33:31.066 [2024-12-10 00:15:05.856201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.856228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.856330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.856356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.856530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.856556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.856674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.856699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.856854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.856880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.856976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.857001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.857087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.857122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.857230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.857256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 
00:33:31.067 [2024-12-10 00:15:05.857346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.857373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.857466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.857491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.857583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.857608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.857713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.857738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.857838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.857864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.857948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.857974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.858064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.858089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.858184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.858211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.858312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.858338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.858427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.858452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 
00:33:31.067 [2024-12-10 00:15:05.858623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.858649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.858735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.858760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.858918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.858944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.859029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.859054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.859137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.859168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.859257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.859282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.859383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.859409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.859489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.859514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.859619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.859643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.859733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.859759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 
00:33:31.067 [2024-12-10 00:15:05.859848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.859873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.860024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.860049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.860132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.860165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.860323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.860348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.860437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.860462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.860564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.860590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.860690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.860716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.860806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.860831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.860932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.860958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.861113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.861138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 
00:33:31.067 [2024-12-10 00:15:05.861285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.861311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.861409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.861434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.861533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.861559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.067 [2024-12-10 00:15:05.861659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.067 [2024-12-10 00:15:05.861685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.067 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.861768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.861793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.861889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.861915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.861998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.862023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.862111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.862137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.862235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.862265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.862434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.862459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 
00:33:31.068 [2024-12-10 00:15:05.862634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.862660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.862814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.862839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.862927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.862953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.863042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.863067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.863178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.863203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.863306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.863330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.863420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.863444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.863525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.863550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.863706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.863730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.863811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.863834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 
00:33:31.068 [2024-12-10 00:15:05.863923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.863947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.864038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.864062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.864223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.864248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.864407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.864431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.864522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.864546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.864630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.864654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.864775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.864799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.864885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.864909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.864994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.865018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.865232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.865257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 
00:33:31.068 [2024-12-10 00:15:05.865412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.865438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.865533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.865557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.865638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.865662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.865764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.865789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.865882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.865906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.866013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.866037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.866115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.866140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.866289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.866313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.866475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.866500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.866644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.866667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 
00:33:31.068 [2024-12-10 00:15:05.866910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.068 [2024-12-10 00:15:05.866935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.068 qpair failed and we were unable to recover it. 00:33:31.068 [2024-12-10 00:15:05.867026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.867051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.867142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.867171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.867252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.867276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.867364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.867389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.867554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.867579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.867731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.867755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.867909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.867933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.868013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.868041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.868128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.868152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 
00:33:31.069 [2024-12-10 00:15:05.868262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.868286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.868367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.868390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.868492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.868517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.868628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.868652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.868736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.868760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.868842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.868867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.868959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.868984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.869084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.869108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.869205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.869231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.869314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.869338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 
00:33:31.069 [2024-12-10 00:15:05.869422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.869447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.869528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.869551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.869661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.869686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.869769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.869792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.869963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.869987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.870098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.870122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.870238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.870263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.870346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.870371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.870467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.870491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.870574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.870598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 
00:33:31.069 [2024-12-10 00:15:05.870803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.870827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.870930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.870955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.871112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.871136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.871290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.871315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.871400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.871424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.871536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.871559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.871642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.871666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.871768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.871792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.871879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.871903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.872002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.872026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 
00:33:31.069 [2024-12-10 00:15:05.872183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.872208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.872306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.872330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.872479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.872502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.872671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.872696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.069 [2024-12-10 00:15:05.872793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.069 [2024-12-10 00:15:05.872817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.069 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.872987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.873010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.873104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.873128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 Malloc0 00:33:31.070 [2024-12-10 00:15:05.873323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.873348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.873444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.873472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.873566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.873590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 
00:33:31.070 [2024-12-10 00:15:05.873742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.873767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.873925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.070 [2024-12-10 00:15:05.873949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.874044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.874069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.874228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.874253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:31.070 [2024-12-10 00:15:05.874350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.874375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.874482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.874506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.874598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.874622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.070 [2024-12-10 00:15:05.874706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.874729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 
00:33:31.070 [2024-12-10 00:15:05.874812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.874837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:31.070 [2024-12-10 00:15:05.875007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.875031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.875119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.875144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.875234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.875257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.875349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.875373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.875464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.875488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.875579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.875602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.875695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.875719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 00:33:31.070 [2024-12-10 00:15:05.875815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.070 [2024-12-10 00:15:05.875839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.070 qpair failed and we were unable to recover it. 
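The shell trace interleaved with the errors above (the Malloc0 bdev name and "rpc_cmd nvmf_create_transport -t tcp -o" from host/target_disconnect.sh) appears to be the nvmf_target_disconnect_tc2 test case bringing its TCP target back up. A rough sketch of an equivalent manual bring-up with SPDK's scripts/rpc.py against a running nvmf_tgt is shown below; only the transport type, the Malloc0 name, and the 10.0.0.2:4420 listener come from this log, while the bdev size, subsystem NQN, and serial number are illustrative assumptions:

    # create the TCP transport, a malloc bdev, and a subsystem exposing it
    ./scripts/rpc.py nvmf_create_transport -t TCP
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # start listening on the address/port the host is retrying against
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once a listener is active on 10.0.0.2:4420, connect() stops returning ECONNREFUSED and the host driver can re-establish its qpairs.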
00:33:31.070 [2024-12-10 00:15:05.875923 - 00:15:05.879613] repeated error group (individual records elided): posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
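On Linux, errno 111 is ECONNREFUSED: the host-side initiator keeps dialing 10.0.0.2:4420, but nothing is accepting TCP connections there yet, so each qpair connect is refused and retried. A tiny illustrative sketch of waiting for the listener is shown below; it uses bash's /dev/tcp redirection and is not part of the SPDK test scripts.

    # Sketch: poll until something is listening on 10.0.0.2:4420 (the address
    # and port the failing connects above are dialing). Illustrative only.
    until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        echo "connection refused (errno 111), retrying..."
        sleep 1
    done
    echo "listener is up"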
00:33:31.070 [2024-12-10 00:15:05.879774 - 00:15:05.880780] repeated error group (individual records elided): posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:33:31.070 [2024-12-10 00:15:05.880813] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:31.070 [2024-12-10 00:15:05.880963 - 00:15:05.888727] repeated error group (individual records elided): posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
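The "*** TCP Transport Init ***" notice above indicates the target's TCP transport now exists, but the connects keep failing until a subsystem and listener are in place as well. One hedged way to check reachability of that address/port by hand from the host side is nvme-cli discovery; this is illustrative only and not the tool the autotest itself uses to drive the connection.

    # Sketch: ask the target for its discovery log over NVMe/TCP, using the
    # address and service ID seen in the connection errors above.
    nvme discover -t tcp -a 10.0.0.2 -s 4420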
00:33:31.071 [2024-12-10 00:15:05.888810 - 00:15:05.891096] repeated error group (interleaved with the trace lines below; individual records elided): posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:33:31.071 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:31.071 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:31.071 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:31.071 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
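The trace above creates subsystem nqn.2016-06.io.spdk:cnode1 with -a (allow any host) and serial SPDK00000000000001. For the host's connects to start succeeding, a TCP listener must also be added; that step is not visible in this excerpt, so the add_listener call in the sketch below is an assumption, with the 10.0.0.2:4420 values taken from the connection errors above.

    # Sketch: subsystem creation as shown in the trace, plus an assumed
    # listener step on the address/port the host is dialing.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420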
00:33:31.072 [2024-12-10 00:15:05.891181 - 00:15:05.891887] repeated error group (individual records elided): posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:33:31.072 [2024-12-10 00:15:05.892025 - 00:15:05.892870] repeated error group (individual records elided): posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:33:31.072 [2024-12-10 00:15:05.892966 - 00:15:05.897333] repeated error group (individual records elided): posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:33:31.072 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.072 [2024-12-10 00:15:05.897494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.897519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.897607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.897631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.897747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.897771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:31.072 [2024-12-10 00:15:05.897851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.897876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.897987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.898010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.072 [2024-12-10 00:15:05.898112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.898137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.898229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.898253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.898341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.898365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 
00:33:31.072 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:31.072 [2024-12-10 00:15:05.898453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.898476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.898569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.898594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.898728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.898781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.898903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.898939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.899049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.899083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0290000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.899208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.899234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.899321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.899346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.899447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.899470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.899571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.899595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 
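[Annotation] The xtrace lines above show the target-side setup step rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 from host/target_disconnect.sh running while the host-side qpair is still retrying its connection. Outside the autotest harness, roughly the same target configuration could be driven with SPDK's scripts/rpc.py against an already running nvmf_tgt on the default RPC socket; the sketch below is illustrative only, and the bdev size, block size and serial number are assumptions, not values taken from this run:

  # minimal NVMe/TCP target setup sketch (assumed parameters, not from this log)
  scripts/rpc.py nvmf_create_transport -t tcp                                                  # enable the TCP transport
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                          # 64 MiB malloc bdev, 512 B blocks (assumed size)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001     # assumed serial number
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                      # the step traced above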
00:33:31.072 [2024-12-10 00:15:05.899688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.899712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.899804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.899829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.072 [2024-12-10 00:15:05.899931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.072 [2024-12-10 00:15:05.899956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.072 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.900050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.900075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.900181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.900206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.900312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.900336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.900436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.900459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.900611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.900634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.900790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.900813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.900918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.900942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 
00:33:31.073 [2024-12-10 00:15:05.901050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.901073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.901163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.901188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.901289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.901312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.901407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.901431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.901523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.901547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.901640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.901664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.901779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.901803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.901888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.901912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.901996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.902024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.902115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.902139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 
00:33:31.073 [2024-12-10 00:15:05.902230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.902254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.902415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.902440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.902530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.902553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.902642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.902666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.902757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.902781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.902954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.902978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.903067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.903089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.903201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.903226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.903331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.903356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.903458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.903482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 
00:33:31.073 [2024-12-10 00:15:05.903569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.903593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.903742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.903765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.903864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.903889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.903972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.903995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.904087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.904111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.904196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.904220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.904303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.904329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.904426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.904451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.904535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.904558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.904655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.904678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 
00:33:31.073 [2024-12-10 00:15:05.904847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.904871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.904988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.905012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.905101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.905125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.905243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.905268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.905353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.905378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.073 [2024-12-10 00:15:05.905471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.905495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.905586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.905610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.905686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.905709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.905788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.905812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:31.073 qpair failed and we were unable to recover it. 
00:33:31.073 [2024-12-10 00:15:05.905911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.905934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.906022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.906046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.073 [2024-12-10 00:15:05.906127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.906151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.906258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.906282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.906366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.906390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:31.073 [2024-12-10 00:15:05.906479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.906504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.906584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.906607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.906691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.906722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.906812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.906836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 
00:33:31.073 [2024-12-10 00:15:05.906915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.906940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.907047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.907071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.907167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.907191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.907275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.907298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.907402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.907427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.907509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.907533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.907626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.907649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.907739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.907763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.907858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.907883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.907965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.907989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 
00:33:31.073 [2024-12-10 00:15:05.908086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.908110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.908197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.908222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.908317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.908342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.908422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.908446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.908529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.073 [2024-12-10 00:15:05.908553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.073 qpair failed and we were unable to recover it. 00:33:31.073 [2024-12-10 00:15:05.908669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.074 [2024-12-10 00:15:05.908692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.074 qpair failed and we were unable to recover it. 00:33:31.074 [2024-12-10 00:15:05.908778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.074 [2024-12-10 00:15:05.908801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.074 qpair failed and we were unable to recover it. 00:33:31.074 [2024-12-10 00:15:05.908951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:31.074 [2024-12-10 00:15:05.908975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0294000b90 with addr=10.0.0.2, port=4420 00:33:31.074 qpair failed and we were unable to recover it. 
00:33:31.074 [2024-12-10 00:15:05.909028] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:31.074 [2024-12-10 00:15:05.911456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.074 [2024-12-10 00:15:05.911550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.074 [2024-12-10 00:15:05.911583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.074 [2024-12-10 00:15:05.911600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.074 [2024-12-10 00:15:05.911614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.074 [2024-12-10 00:15:05.911653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.074 qpair failed and we were unable to recover it. 00:33:31.074 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.074 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:31.074 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.074 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:31.342 [2024-12-10 00:15:05.921394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.342 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.342 [2024-12-10 00:15:05.921491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.342 [2024-12-10 00:15:05.921518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.342 [2024-12-10 00:15:05.921539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.342 [2024-12-10 00:15:05.921556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.342 [2024-12-10 00:15:05.921584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.342 qpair failed and we were unable to recover it. 
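[Annotation] Up to this point every retry fails in posix_sock_create with errno = 111 (ECONNREFUSED) because the listener was not yet up; once the "NVMe/TCP Target Listening" notice appears, the TCP connection is accepted but the Fabrics CONNECT command itself is rejected (the target reports "Unknown controller ID 0x1", completion sct 1 / sc 130), so the host still cannot recover the qpair. This is the kind of failure path the nvmf_target_disconnect test case is exercising. For reference only, a listener like this one could also be probed from a separate host with the kernel nvme-cli tools; the test itself does not do this (it uses the SPDK host stack), so the lines below are purely an illustrative sketch:

  # probe the NVMe/TCP listener with nvme-cli (illustrative; not part of the autotest)
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1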
00:33:31.342 00:15:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 530644 00:33:31.342 [2024-12-10 00:15:05.931378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.342 [2024-12-10 00:15:05.931444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.342 [2024-12-10 00:15:05.931464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.342 [2024-12-10 00:15:05.931472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.342 [2024-12-10 00:15:05.931480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.342 [2024-12-10 00:15:05.931499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.342 qpair failed and we were unable to recover it. 00:33:31.342 [2024-12-10 00:15:05.941354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.342 [2024-12-10 00:15:05.941414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.342 [2024-12-10 00:15:05.941429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.342 [2024-12-10 00:15:05.941437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.342 [2024-12-10 00:15:05.941444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.342 [2024-12-10 00:15:05.941460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.342 qpair failed and we were unable to recover it. 00:33:31.342 [2024-12-10 00:15:05.951326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.342 [2024-12-10 00:15:05.951385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.342 [2024-12-10 00:15:05.951400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.342 [2024-12-10 00:15:05.951407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.342 [2024-12-10 00:15:05.951414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.342 [2024-12-10 00:15:05.951430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.342 qpair failed and we were unable to recover it. 
00:33:31.342 [2024-12-10 00:15:05.961387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.342 [2024-12-10 00:15:05.961455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.342 [2024-12-10 00:15:05.961469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.342 [2024-12-10 00:15:05.961477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.342 [2024-12-10 00:15:05.961488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.342 [2024-12-10 00:15:05.961504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.342 qpair failed and we were unable to recover it. 00:33:31.342 [2024-12-10 00:15:05.971402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.342 [2024-12-10 00:15:05.971466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.342 [2024-12-10 00:15:05.971480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.342 [2024-12-10 00:15:05.971487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.342 [2024-12-10 00:15:05.971493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.342 [2024-12-10 00:15:05.971509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.342 qpair failed and we were unable to recover it. 00:33:31.342 [2024-12-10 00:15:05.981424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.342 [2024-12-10 00:15:05.981482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.342 [2024-12-10 00:15:05.981495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.342 [2024-12-10 00:15:05.981502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.342 [2024-12-10 00:15:05.981509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.342 [2024-12-10 00:15:05.981524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.342 qpair failed and we were unable to recover it. 
00:33:31.342 [2024-12-10 00:15:05.991505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.342 [2024-12-10 00:15:05.991560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.342 [2024-12-10 00:15:05.991574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.342 [2024-12-10 00:15:05.991581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.342 [2024-12-10 00:15:05.991588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.342 [2024-12-10 00:15:05.991603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.342 qpair failed and we were unable to recover it. 00:33:31.342 [2024-12-10 00:15:06.001542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.342 [2024-12-10 00:15:06.001595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.342 [2024-12-10 00:15:06.001609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.001616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.001622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.001638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 00:33:31.343 [2024-12-10 00:15:06.011511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.011603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.011618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.011626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.011633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.011649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 
00:33:31.343 [2024-12-10 00:15:06.021511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.021570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.021585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.021593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.021599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.021616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 00:33:31.343 [2024-12-10 00:15:06.031534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.031602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.031618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.031625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.031632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.031648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 00:33:31.343 [2024-12-10 00:15:06.041606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.041690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.041706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.041713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.041720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.041736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 
00:33:31.343 [2024-12-10 00:15:06.051583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.051635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.051653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.051661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.051668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.051684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 00:33:31.343 [2024-12-10 00:15:06.061724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.061795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.061810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.061817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.061823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.061839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 00:33:31.343 [2024-12-10 00:15:06.071701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.071763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.071777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.071784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.071791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.071806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 
00:33:31.343 [2024-12-10 00:15:06.081742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.081815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.081830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.081837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.081843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.081859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 00:33:31.343 [2024-12-10 00:15:06.091706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.091759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.091774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.091786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.091793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.091808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 00:33:31.343 [2024-12-10 00:15:06.101754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.101825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.101840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.101848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.101855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.101871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 
00:33:31.343 [2024-12-10 00:15:06.111811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.111881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.111896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.111903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.111910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.111926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 00:33:31.343 [2024-12-10 00:15:06.121780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.121834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.121849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.121856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.121862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.121878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 00:33:31.343 [2024-12-10 00:15:06.131867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.131923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.131938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.131945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.131951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.131967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 
00:33:31.343 [2024-12-10 00:15:06.141854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.141945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.141960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.141967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.141973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.141988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 00:33:31.343 [2024-12-10 00:15:06.151931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.151991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.152005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.152013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.152019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.152035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 00:33:31.343 [2024-12-10 00:15:06.162009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.162071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.162086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.162094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.162100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.162116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 
00:33:31.343 [2024-12-10 00:15:06.171990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.172050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.172064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.172071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.172078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.172093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 00:33:31.343 [2024-12-10 00:15:06.182041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.182120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.182134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.182142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.182147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.182172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 00:33:31.343 [2024-12-10 00:15:06.192004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.192066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.343 [2024-12-10 00:15:06.192080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.343 [2024-12-10 00:15:06.192087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.343 [2024-12-10 00:15:06.192093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.343 [2024-12-10 00:15:06.192109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.343 qpair failed and we were unable to recover it. 
00:33:31.343 [2024-12-10 00:15:06.202128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.343 [2024-12-10 00:15:06.202220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.344 [2024-12-10 00:15:06.202235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.344 [2024-12-10 00:15:06.202242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.344 [2024-12-10 00:15:06.202248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.344 [2024-12-10 00:15:06.202264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.344 qpair failed and we were unable to recover it. 00:33:31.344 [2024-12-10 00:15:06.212116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.344 [2024-12-10 00:15:06.212179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.344 [2024-12-10 00:15:06.212193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.344 [2024-12-10 00:15:06.212200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.344 [2024-12-10 00:15:06.212207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.344 [2024-12-10 00:15:06.212222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.344 qpair failed and we were unable to recover it. 00:33:31.344 [2024-12-10 00:15:06.222164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.344 [2024-12-10 00:15:06.222276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.344 [2024-12-10 00:15:06.222291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.344 [2024-12-10 00:15:06.222302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.344 [2024-12-10 00:15:06.222308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.344 [2024-12-10 00:15:06.222324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.344 qpair failed and we were unable to recover it. 
00:33:31.344 [2024-12-10 00:15:06.232117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.344 [2024-12-10 00:15:06.232178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.344 [2024-12-10 00:15:06.232193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.344 [2024-12-10 00:15:06.232202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.344 [2024-12-10 00:15:06.232209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.344 [2024-12-10 00:15:06.232225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.344 qpair failed and we were unable to recover it. 00:33:31.344 [2024-12-10 00:15:06.242189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.344 [2024-12-10 00:15:06.242245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.344 [2024-12-10 00:15:06.242260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.344 [2024-12-10 00:15:06.242279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.344 [2024-12-10 00:15:06.242286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.344 [2024-12-10 00:15:06.242303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.344 qpair failed and we were unable to recover it. 00:33:31.344 [2024-12-10 00:15:06.252229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.344 [2024-12-10 00:15:06.252285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.344 [2024-12-10 00:15:06.252299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.344 [2024-12-10 00:15:06.252306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.344 [2024-12-10 00:15:06.252313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.344 [2024-12-10 00:15:06.252330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.344 qpair failed and we were unable to recover it. 
00:33:31.344 [2024-12-10 00:15:06.262222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.344 [2024-12-10 00:15:06.262287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.344 [2024-12-10 00:15:06.262310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.344 [2024-12-10 00:15:06.262321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.344 [2024-12-10 00:15:06.262330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.344 [2024-12-10 00:15:06.262356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.344 qpair failed and we were unable to recover it. 00:33:31.640 [2024-12-10 00:15:06.272370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.640 [2024-12-10 00:15:06.272462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.640 [2024-12-10 00:15:06.272481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.640 [2024-12-10 00:15:06.272489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.640 [2024-12-10 00:15:06.272496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.640 [2024-12-10 00:15:06.272513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.640 qpair failed and we were unable to recover it. 00:33:31.640 [2024-12-10 00:15:06.282333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.640 [2024-12-10 00:15:06.282400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.640 [2024-12-10 00:15:06.282415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.640 [2024-12-10 00:15:06.282422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.640 [2024-12-10 00:15:06.282429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.640 [2024-12-10 00:15:06.282445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.640 qpair failed and we were unable to recover it. 
00:33:31.640 [2024-12-10 00:15:06.292340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.640 [2024-12-10 00:15:06.292397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.640 [2024-12-10 00:15:06.292411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.640 [2024-12-10 00:15:06.292418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.640 [2024-12-10 00:15:06.292425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.640 [2024-12-10 00:15:06.292440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.640 qpair failed and we were unable to recover it. 00:33:31.640 [2024-12-10 00:15:06.302364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.640 [2024-12-10 00:15:06.302443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.640 [2024-12-10 00:15:06.302457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.640 [2024-12-10 00:15:06.302465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.640 [2024-12-10 00:15:06.302471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.640 [2024-12-10 00:15:06.302486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.640 qpair failed and we were unable to recover it. 00:33:31.640 [2024-12-10 00:15:06.312417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.641 [2024-12-10 00:15:06.312476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.641 [2024-12-10 00:15:06.312490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.641 [2024-12-10 00:15:06.312498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.641 [2024-12-10 00:15:06.312505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.641 [2024-12-10 00:15:06.312520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.641 qpair failed and we were unable to recover it. 
00:33:31.641 [2024-12-10 00:15:06.322461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.641 [2024-12-10 00:15:06.322531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.641 [2024-12-10 00:15:06.322545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.641 [2024-12-10 00:15:06.322553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.641 [2024-12-10 00:15:06.322559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.641 [2024-12-10 00:15:06.322574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.641 qpair failed and we were unable to recover it. 00:33:31.641 [2024-12-10 00:15:06.332462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.641 [2024-12-10 00:15:06.332518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.641 [2024-12-10 00:15:06.332532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.641 [2024-12-10 00:15:06.332539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.641 [2024-12-10 00:15:06.332545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.641 [2024-12-10 00:15:06.332560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.641 qpair failed and we were unable to recover it. 00:33:31.641 [2024-12-10 00:15:06.342492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.641 [2024-12-10 00:15:06.342558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.641 [2024-12-10 00:15:06.342571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.641 [2024-12-10 00:15:06.342579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.641 [2024-12-10 00:15:06.342585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.641 [2024-12-10 00:15:06.342600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.641 qpair failed and we were unable to recover it. 
00:33:31.641 [2024-12-10 00:15:06.352524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.641 [2024-12-10 00:15:06.352625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.641 [2024-12-10 00:15:06.352643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.641 [2024-12-10 00:15:06.352650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.641 [2024-12-10 00:15:06.352656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.641 [2024-12-10 00:15:06.352671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.641 qpair failed and we were unable to recover it. 00:33:31.641 [2024-12-10 00:15:06.362521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.641 [2024-12-10 00:15:06.362577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.641 [2024-12-10 00:15:06.362591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.641 [2024-12-10 00:15:06.362599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.641 [2024-12-10 00:15:06.362606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.641 [2024-12-10 00:15:06.362621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.641 qpair failed and we were unable to recover it. 00:33:31.641 [2024-12-10 00:15:06.372561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.641 [2024-12-10 00:15:06.372613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.641 [2024-12-10 00:15:06.372627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.641 [2024-12-10 00:15:06.372634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.641 [2024-12-10 00:15:06.372641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.641 [2024-12-10 00:15:06.372656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.641 qpair failed and we were unable to recover it. 
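The "Connect command completed with error: sct 1, sc 130" lines are the raw status fields of the NVMe completion entry for the Fabrics CONNECT command: status code type 1 is the command-specific set, and 130 is 0x82. The short sketch below shows how a completion callback can read those fields through the public spdk_nvme_cpl accessors; the callback name and the fabricated completion in main() are illustrative assumptions, not part of the test.

/* connect_status_sketch.c - illustrative only: reading the sct/sc fields that
 * the log prints for the failed Fabrics CONNECT completions. */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Callback shape matches spdk_nvme_cmd_cb; the name is hypothetical. */
static void
connect_status_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;

	if (spdk_nvme_cpl_is_error(cpl)) {
		/* For the failures above this prints sct=1 (command specific)
		 * and sc=130 (0x82). */
		fprintf(stderr, "command failed: sct=%u sc=%u dnr=%u\n",
			(unsigned)cpl->status.sct, (unsigned)cpl->status.sc,
			(unsigned)cpl->status.dnr);
	}
}

int main(void)
{
	/* Fabricate a completion carrying the status seen in the log. */
	struct spdk_nvme_cpl cpl = {0};

	cpl.status.sct = SPDK_NVME_SCT_COMMAND_SPECIFIC;	/* 1 */
	cpl.status.sc = 0x82;					/* 130 */
	connect_status_cb(NULL, &cpl);
	return 0;
}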
00:33:31.641 [2024-12-10 00:15:06.382614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.641 [2024-12-10 00:15:06.382689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.641 [2024-12-10 00:15:06.382703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.641 [2024-12-10 00:15:06.382710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.641 [2024-12-10 00:15:06.382717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.641 [2024-12-10 00:15:06.382732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.641 qpair failed and we were unable to recover it. 00:33:31.641 [2024-12-10 00:15:06.392597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.641 [2024-12-10 00:15:06.392700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.641 [2024-12-10 00:15:06.392714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.641 [2024-12-10 00:15:06.392721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.641 [2024-12-10 00:15:06.392730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.641 [2024-12-10 00:15:06.392746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.641 qpair failed and we were unable to recover it. 00:33:31.641 [2024-12-10 00:15:06.402659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.641 [2024-12-10 00:15:06.402716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.641 [2024-12-10 00:15:06.402729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.641 [2024-12-10 00:15:06.402736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.641 [2024-12-10 00:15:06.402743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.641 [2024-12-10 00:15:06.402758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.641 qpair failed and we were unable to recover it. 
00:33:31.641 [2024-12-10 00:15:06.412663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.641 [2024-12-10 00:15:06.412719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.641 [2024-12-10 00:15:06.412732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.641 [2024-12-10 00:15:06.412739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.641 [2024-12-10 00:15:06.412746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.641 [2024-12-10 00:15:06.412761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.641 qpair failed and we were unable to recover it. 00:33:31.641 [2024-12-10 00:15:06.422717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.641 [2024-12-10 00:15:06.422785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.641 [2024-12-10 00:15:06.422799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.641 [2024-12-10 00:15:06.422806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.641 [2024-12-10 00:15:06.422812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.641 [2024-12-10 00:15:06.422827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.641 qpair failed and we were unable to recover it. 00:33:31.641 [2024-12-10 00:15:06.432766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.641 [2024-12-10 00:15:06.432820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.641 [2024-12-10 00:15:06.432833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.641 [2024-12-10 00:15:06.432840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.641 [2024-12-10 00:15:06.432847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.641 [2024-12-10 00:15:06.432862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.641 qpair failed and we were unable to recover it. 
00:33:31.641 [2024-12-10 00:15:06.442774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.641 [2024-12-10 00:15:06.442850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.641 [2024-12-10 00:15:06.442863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.641 [2024-12-10 00:15:06.442871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.642 [2024-12-10 00:15:06.442878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.642 [2024-12-10 00:15:06.442893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.642 qpair failed and we were unable to recover it. 00:33:31.642 [2024-12-10 00:15:06.452812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.642 [2024-12-10 00:15:06.452867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.642 [2024-12-10 00:15:06.452881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.642 [2024-12-10 00:15:06.452888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.642 [2024-12-10 00:15:06.452895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.642 [2024-12-10 00:15:06.452910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.642 qpair failed and we were unable to recover it. 00:33:31.642 [2024-12-10 00:15:06.462838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.642 [2024-12-10 00:15:06.462896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.642 [2024-12-10 00:15:06.462909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.642 [2024-12-10 00:15:06.462917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.642 [2024-12-10 00:15:06.462924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.642 [2024-12-10 00:15:06.462939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.642 qpair failed and we were unable to recover it. 
00:33:31.642 [2024-12-10 00:15:06.472863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.642 [2024-12-10 00:15:06.472921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.642 [2024-12-10 00:15:06.472934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.642 [2024-12-10 00:15:06.472941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.642 [2024-12-10 00:15:06.472948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.642 [2024-12-10 00:15:06.472962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.642 qpair failed and we were unable to recover it. 00:33:31.642 [2024-12-10 00:15:06.482879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.642 [2024-12-10 00:15:06.482933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.642 [2024-12-10 00:15:06.482949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.642 [2024-12-10 00:15:06.482956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.642 [2024-12-10 00:15:06.482963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.642 [2024-12-10 00:15:06.482978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.642 qpair failed and we were unable to recover it. 00:33:31.642 [2024-12-10 00:15:06.492926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.642 [2024-12-10 00:15:06.493001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.642 [2024-12-10 00:15:06.493014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.642 [2024-12-10 00:15:06.493021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.642 [2024-12-10 00:15:06.493028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.642 [2024-12-10 00:15:06.493043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.642 qpair failed and we were unable to recover it. 
00:33:31.642 [2024-12-10 00:15:06.502968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.642 [2024-12-10 00:15:06.503043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.642 [2024-12-10 00:15:06.503059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.642 [2024-12-10 00:15:06.503069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.642 [2024-12-10 00:15:06.503076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.642 [2024-12-10 00:15:06.503093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.642 qpair failed and we were unable to recover it. 00:33:31.642 [2024-12-10 00:15:06.513023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.642 [2024-12-10 00:15:06.513083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.642 [2024-12-10 00:15:06.513097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.642 [2024-12-10 00:15:06.513104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.642 [2024-12-10 00:15:06.513110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.642 [2024-12-10 00:15:06.513125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.642 qpair failed and we were unable to recover it. 00:33:31.642 [2024-12-10 00:15:06.523055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.642 [2024-12-10 00:15:06.523107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.642 [2024-12-10 00:15:06.523121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.642 [2024-12-10 00:15:06.523128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.642 [2024-12-10 00:15:06.523138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.642 [2024-12-10 00:15:06.523153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.642 qpair failed and we were unable to recover it. 
00:33:31.642 [2024-12-10 00:15:06.533040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.642 [2024-12-10 00:15:06.533093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.642 [2024-12-10 00:15:06.533106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.642 [2024-12-10 00:15:06.533113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.642 [2024-12-10 00:15:06.533119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.642 [2024-12-10 00:15:06.533134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.642 qpair failed and we were unable to recover it. 00:33:31.642 [2024-12-10 00:15:06.543085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.642 [2024-12-10 00:15:06.543141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.642 [2024-12-10 00:15:06.543154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.642 [2024-12-10 00:15:06.543168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.642 [2024-12-10 00:15:06.543174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.642 [2024-12-10 00:15:06.543191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.642 qpair failed and we were unable to recover it. 00:33:31.642 [2024-12-10 00:15:06.553123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.642 [2024-12-10 00:15:06.553234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.642 [2024-12-10 00:15:06.553250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.642 [2024-12-10 00:15:06.553258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.642 [2024-12-10 00:15:06.553264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.642 [2024-12-10 00:15:06.553280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.642 qpair failed and we were unable to recover it. 
00:33:31.925 [2024-12-10 00:15:06.563167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.925 [2024-12-10 00:15:06.563230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.925 [2024-12-10 00:15:06.563245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.925 [2024-12-10 00:15:06.563252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.925 [2024-12-10 00:15:06.563258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.925 [2024-12-10 00:15:06.563274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.925 qpair failed and we were unable to recover it. 00:33:31.925 [2024-12-10 00:15:06.573202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.925 [2024-12-10 00:15:06.573258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.925 [2024-12-10 00:15:06.573272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.925 [2024-12-10 00:15:06.573280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.925 [2024-12-10 00:15:06.573287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.925 [2024-12-10 00:15:06.573302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.925 qpair failed and we were unable to recover it. 00:33:31.925 [2024-12-10 00:15:06.583217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.925 [2024-12-10 00:15:06.583279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.925 [2024-12-10 00:15:06.583293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.925 [2024-12-10 00:15:06.583300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.925 [2024-12-10 00:15:06.583307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.925 [2024-12-10 00:15:06.583322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.925 qpair failed and we were unable to recover it. 
00:33:31.925 [2024-12-10 00:15:06.593235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.925 [2024-12-10 00:15:06.593292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.925 [2024-12-10 00:15:06.593305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.925 [2024-12-10 00:15:06.593312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.925 [2024-12-10 00:15:06.593318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.925 [2024-12-10 00:15:06.593334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.925 qpair failed and we were unable to recover it. 00:33:31.925 [2024-12-10 00:15:06.603254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.925 [2024-12-10 00:15:06.603307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.925 [2024-12-10 00:15:06.603321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.925 [2024-12-10 00:15:06.603328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.925 [2024-12-10 00:15:06.603335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.925 [2024-12-10 00:15:06.603350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.925 qpair failed and we were unable to recover it. 00:33:31.925 [2024-12-10 00:15:06.613273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.925 [2024-12-10 00:15:06.613332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.925 [2024-12-10 00:15:06.613349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.925 [2024-12-10 00:15:06.613357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.925 [2024-12-10 00:15:06.613363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.925 [2024-12-10 00:15:06.613379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.925 qpair failed and we were unable to recover it. 
00:33:31.926 [2024-12-10 00:15:06.623291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.926 [2024-12-10 00:15:06.623349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.926 [2024-12-10 00:15:06.623363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.926 [2024-12-10 00:15:06.623370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.926 [2024-12-10 00:15:06.623377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.926 [2024-12-10 00:15:06.623391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.926 qpair failed and we were unable to recover it. 00:33:31.926 [2024-12-10 00:15:06.633305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.926 [2024-12-10 00:15:06.633364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.926 [2024-12-10 00:15:06.633378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.926 [2024-12-10 00:15:06.633385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.926 [2024-12-10 00:15:06.633392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.926 [2024-12-10 00:15:06.633407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.926 qpair failed and we were unable to recover it. 00:33:31.926 [2024-12-10 00:15:06.643326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.926 [2024-12-10 00:15:06.643379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.926 [2024-12-10 00:15:06.643393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.926 [2024-12-10 00:15:06.643400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.926 [2024-12-10 00:15:06.643408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.926 [2024-12-10 00:15:06.643424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.926 qpair failed and we were unable to recover it. 
00:33:31.926 [2024-12-10 00:15:06.653368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.926 [2024-12-10 00:15:06.653420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.926 [2024-12-10 00:15:06.653433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.926 [2024-12-10 00:15:06.653444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.926 [2024-12-10 00:15:06.653450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.926 [2024-12-10 00:15:06.653466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.926 qpair failed and we were unable to recover it. 00:33:31.926 [2024-12-10 00:15:06.663371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.926 [2024-12-10 00:15:06.663444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.926 [2024-12-10 00:15:06.663458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.926 [2024-12-10 00:15:06.663465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.926 [2024-12-10 00:15:06.663472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.926 [2024-12-10 00:15:06.663487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.926 qpair failed and we were unable to recover it. 00:33:31.926 [2024-12-10 00:15:06.673450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.926 [2024-12-10 00:15:06.673508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.926 [2024-12-10 00:15:06.673521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.926 [2024-12-10 00:15:06.673529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.926 [2024-12-10 00:15:06.673536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.926 [2024-12-10 00:15:06.673552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.926 qpair failed and we were unable to recover it. 
00:33:31.926 [2024-12-10 00:15:06.683498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.926 [2024-12-10 00:15:06.683558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.926 [2024-12-10 00:15:06.683572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.926 [2024-12-10 00:15:06.683581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.926 [2024-12-10 00:15:06.683587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.926 [2024-12-10 00:15:06.683603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.926 qpair failed and we were unable to recover it. 00:33:31.926 [2024-12-10 00:15:06.693504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.926 [2024-12-10 00:15:06.693559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.926 [2024-12-10 00:15:06.693573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.926 [2024-12-10 00:15:06.693580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.926 [2024-12-10 00:15:06.693587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.926 [2024-12-10 00:15:06.693602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.926 qpair failed and we were unable to recover it. 00:33:31.926 [2024-12-10 00:15:06.703563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.926 [2024-12-10 00:15:06.703628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.926 [2024-12-10 00:15:06.703641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.926 [2024-12-10 00:15:06.703649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.926 [2024-12-10 00:15:06.703656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.926 [2024-12-10 00:15:06.703671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.926 qpair failed and we were unable to recover it. 
00:33:31.926 [2024-12-10 00:15:06.713608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.926 [2024-12-10 00:15:06.713679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.926 [2024-12-10 00:15:06.713692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.926 [2024-12-10 00:15:06.713700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.926 [2024-12-10 00:15:06.713706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.926 [2024-12-10 00:15:06.713722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.926 qpair failed and we were unable to recover it. 00:33:31.926 [2024-12-10 00:15:06.723607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.926 [2024-12-10 00:15:06.723658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.926 [2024-12-10 00:15:06.723671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.926 [2024-12-10 00:15:06.723678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.926 [2024-12-10 00:15:06.723684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.926 [2024-12-10 00:15:06.723699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.926 qpair failed and we were unable to recover it. 00:33:31.926 [2024-12-10 00:15:06.733628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.926 [2024-12-10 00:15:06.733688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.926 [2024-12-10 00:15:06.733711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.926 [2024-12-10 00:15:06.733719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.926 [2024-12-10 00:15:06.733725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.926 [2024-12-10 00:15:06.733745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.926 qpair failed and we were unable to recover it. 
00:33:31.926 [2024-12-10 00:15:06.743583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.926 [2024-12-10 00:15:06.743645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.926 [2024-12-10 00:15:06.743659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.926 [2024-12-10 00:15:06.743666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.926 [2024-12-10 00:15:06.743672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.926 [2024-12-10 00:15:06.743688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.926 qpair failed and we were unable to recover it. 00:33:31.926 [2024-12-10 00:15:06.753674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.927 [2024-12-10 00:15:06.753733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.927 [2024-12-10 00:15:06.753747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.927 [2024-12-10 00:15:06.753755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.927 [2024-12-10 00:15:06.753761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.927 [2024-12-10 00:15:06.753777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.927 qpair failed and we were unable to recover it. 00:33:31.927 [2024-12-10 00:15:06.763693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.927 [2024-12-10 00:15:06.763751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.927 [2024-12-10 00:15:06.763764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.927 [2024-12-10 00:15:06.763771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.927 [2024-12-10 00:15:06.763777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.927 [2024-12-10 00:15:06.763794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.927 qpair failed and we were unable to recover it. 
00:33:31.927 [2024-12-10 00:15:06.773734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.927 [2024-12-10 00:15:06.773795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.927 [2024-12-10 00:15:06.773809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.927 [2024-12-10 00:15:06.773817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.927 [2024-12-10 00:15:06.773823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.927 [2024-12-10 00:15:06.773838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.927 qpair failed and we were unable to recover it. 00:33:31.927 [2024-12-10 00:15:06.783760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.927 [2024-12-10 00:15:06.783818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.927 [2024-12-10 00:15:06.783832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.927 [2024-12-10 00:15:06.783843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.927 [2024-12-10 00:15:06.783849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.927 [2024-12-10 00:15:06.783865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.927 qpair failed and we were unable to recover it. 00:33:31.927 [2024-12-10 00:15:06.793785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.927 [2024-12-10 00:15:06.793843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.927 [2024-12-10 00:15:06.793856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.927 [2024-12-10 00:15:06.793864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.927 [2024-12-10 00:15:06.793871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.927 [2024-12-10 00:15:06.793886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.927 qpair failed and we were unable to recover it. 
00:33:31.927 [2024-12-10 00:15:06.803814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.927 [2024-12-10 00:15:06.803867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.927 [2024-12-10 00:15:06.803881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.927 [2024-12-10 00:15:06.803887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.927 [2024-12-10 00:15:06.803894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.927 [2024-12-10 00:15:06.803909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.927 qpair failed and we were unable to recover it. 00:33:31.927 [2024-12-10 00:15:06.813826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.927 [2024-12-10 00:15:06.813878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.927 [2024-12-10 00:15:06.813890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.927 [2024-12-10 00:15:06.813897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.927 [2024-12-10 00:15:06.813904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.927 [2024-12-10 00:15:06.813920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.927 qpair failed and we were unable to recover it. 00:33:31.927 [2024-12-10 00:15:06.823872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.927 [2024-12-10 00:15:06.823925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.927 [2024-12-10 00:15:06.823939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.927 [2024-12-10 00:15:06.823946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.927 [2024-12-10 00:15:06.823953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.927 [2024-12-10 00:15:06.823973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.927 qpair failed and we were unable to recover it. 
00:33:31.927 [2024-12-10 00:15:06.833889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.927 [2024-12-10 00:15:06.833945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.927 [2024-12-10 00:15:06.833958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.927 [2024-12-10 00:15:06.833965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.927 [2024-12-10 00:15:06.833972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.927 [2024-12-10 00:15:06.833988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.927 qpair failed and we were unable to recover it. 00:33:31.927 [2024-12-10 00:15:06.843936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.927 [2024-12-10 00:15:06.843991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.927 [2024-12-10 00:15:06.844004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.927 [2024-12-10 00:15:06.844011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.927 [2024-12-10 00:15:06.844018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.927 [2024-12-10 00:15:06.844033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.927 qpair failed and we were unable to recover it. 00:33:31.927 [2024-12-10 00:15:06.853986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:31.927 [2024-12-10 00:15:06.854041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:31.927 [2024-12-10 00:15:06.854055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:31.927 [2024-12-10 00:15:06.854062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:31.927 [2024-12-10 00:15:06.854069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:31.927 [2024-12-10 00:15:06.854085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:31.927 qpair failed and we were unable to recover it. 
00:33:32.205 [2024-12-10 00:15:06.864024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.205 [2024-12-10 00:15:06.864101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.205 [2024-12-10 00:15:06.864115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.205 [2024-12-10 00:15:06.864121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.205 [2024-12-10 00:15:06.864128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.205 [2024-12-10 00:15:06.864143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.205 qpair failed and we were unable to recover it. 00:33:32.205 [2024-12-10 00:15:06.874050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.205 [2024-12-10 00:15:06.874111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.205 [2024-12-10 00:15:06.874125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.205 [2024-12-10 00:15:06.874133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.205 [2024-12-10 00:15:06.874139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.206 [2024-12-10 00:15:06.874155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.206 qpair failed and we were unable to recover it. 00:33:32.206 [2024-12-10 00:15:06.884026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.206 [2024-12-10 00:15:06.884083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.206 [2024-12-10 00:15:06.884097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.206 [2024-12-10 00:15:06.884104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.206 [2024-12-10 00:15:06.884111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.206 [2024-12-10 00:15:06.884127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.206 qpair failed and we were unable to recover it. 
00:33:32.206 [2024-12-10 00:15:06.894065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.206 [2024-12-10 00:15:06.894118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.206 [2024-12-10 00:15:06.894131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.206 [2024-12-10 00:15:06.894138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.206 [2024-12-10 00:15:06.894144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.206 [2024-12-10 00:15:06.894165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.206 qpair failed and we were unable to recover it. 00:33:32.206 [2024-12-10 00:15:06.904023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.206 [2024-12-10 00:15:06.904082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.206 [2024-12-10 00:15:06.904096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.206 [2024-12-10 00:15:06.904103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.206 [2024-12-10 00:15:06.904110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.206 [2024-12-10 00:15:06.904125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.206 qpair failed and we were unable to recover it. 00:33:32.206 [2024-12-10 00:15:06.914131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.206 [2024-12-10 00:15:06.914192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.206 [2024-12-10 00:15:06.914209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.206 [2024-12-10 00:15:06.914216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.206 [2024-12-10 00:15:06.914222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.206 [2024-12-10 00:15:06.914238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.206 qpair failed and we were unable to recover it. 
00:33:32.206 [2024-12-10 00:15:06.924193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.206 [2024-12-10 00:15:06.924259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.206 [2024-12-10 00:15:06.924273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.206 [2024-12-10 00:15:06.924280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.206 [2024-12-10 00:15:06.924286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.206 [2024-12-10 00:15:06.924302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.206 qpair failed and we were unable to recover it. 00:33:32.206 [2024-12-10 00:15:06.934183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.206 [2024-12-10 00:15:06.934238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.206 [2024-12-10 00:15:06.934252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.206 [2024-12-10 00:15:06.934260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.206 [2024-12-10 00:15:06.934267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.206 [2024-12-10 00:15:06.934281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.206 qpair failed and we were unable to recover it. 00:33:32.206 [2024-12-10 00:15:06.944228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.206 [2024-12-10 00:15:06.944286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.206 [2024-12-10 00:15:06.944299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.206 [2024-12-10 00:15:06.944306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.206 [2024-12-10 00:15:06.944312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.206 [2024-12-10 00:15:06.944328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.206 qpair failed and we were unable to recover it. 
00:33:32.206 [2024-12-10 00:15:06.954257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.206 [2024-12-10 00:15:06.954315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.206 [2024-12-10 00:15:06.954328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.206 [2024-12-10 00:15:06.954335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.206 [2024-12-10 00:15:06.954345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.206 [2024-12-10 00:15:06.954360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.206 qpair failed and we were unable to recover it. 00:33:32.206 [2024-12-10 00:15:06.964288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.206 [2024-12-10 00:15:06.964344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.206 [2024-12-10 00:15:06.964358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.206 [2024-12-10 00:15:06.964366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.206 [2024-12-10 00:15:06.964372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.206 [2024-12-10 00:15:06.964388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.206 qpair failed and we were unable to recover it. 00:33:32.206 [2024-12-10 00:15:06.974316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.206 [2024-12-10 00:15:06.974365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.206 [2024-12-10 00:15:06.974378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.206 [2024-12-10 00:15:06.974385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.206 [2024-12-10 00:15:06.974391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.206 [2024-12-10 00:15:06.974407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.206 qpair failed and we were unable to recover it. 
00:33:32.206 [2024-12-10 00:15:06.984362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.206 [2024-12-10 00:15:06.984420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.206 [2024-12-10 00:15:06.984433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.206 [2024-12-10 00:15:06.984440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.206 [2024-12-10 00:15:06.984446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.206 [2024-12-10 00:15:06.984462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.206 qpair failed and we were unable to recover it. 00:33:32.206 [2024-12-10 00:15:06.994381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.206 [2024-12-10 00:15:06.994437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.206 [2024-12-10 00:15:06.994450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.206 [2024-12-10 00:15:06.994457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.206 [2024-12-10 00:15:06.994464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.206 [2024-12-10 00:15:06.994479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.206 qpair failed and we were unable to recover it. 00:33:32.207 [2024-12-10 00:15:07.004403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.207 [2024-12-10 00:15:07.004455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.207 [2024-12-10 00:15:07.004468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.207 [2024-12-10 00:15:07.004475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.207 [2024-12-10 00:15:07.004482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.207 [2024-12-10 00:15:07.004497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.207 qpair failed and we were unable to recover it. 
00:33:32.207 [2024-12-10 00:15:07.014482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.207 [2024-12-10 00:15:07.014574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.207 [2024-12-10 00:15:07.014587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.207 [2024-12-10 00:15:07.014595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.207 [2024-12-10 00:15:07.014601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.207 [2024-12-10 00:15:07.014616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.207 qpair failed and we were unable to recover it. 00:33:32.207 [2024-12-10 00:15:07.024469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.207 [2024-12-10 00:15:07.024525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.207 [2024-12-10 00:15:07.024537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.207 [2024-12-10 00:15:07.024544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.207 [2024-12-10 00:15:07.024550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.207 [2024-12-10 00:15:07.024565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.207 qpair failed and we were unable to recover it. 00:33:32.207 [2024-12-10 00:15:07.034497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.207 [2024-12-10 00:15:07.034550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.207 [2024-12-10 00:15:07.034563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.207 [2024-12-10 00:15:07.034569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.207 [2024-12-10 00:15:07.034577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.207 [2024-12-10 00:15:07.034592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.207 qpair failed and we were unable to recover it. 
00:33:32.207 [2024-12-10 00:15:07.044579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.207 [2024-12-10 00:15:07.044643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.207 [2024-12-10 00:15:07.044660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.207 [2024-12-10 00:15:07.044667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.207 [2024-12-10 00:15:07.044673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.207 [2024-12-10 00:15:07.044688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.207 qpair failed and we were unable to recover it. 00:33:32.207 [2024-12-10 00:15:07.054552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.207 [2024-12-10 00:15:07.054608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.207 [2024-12-10 00:15:07.054621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.207 [2024-12-10 00:15:07.054629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.207 [2024-12-10 00:15:07.054636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.207 [2024-12-10 00:15:07.054651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.207 qpair failed and we were unable to recover it. 00:33:32.207 [2024-12-10 00:15:07.064535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.207 [2024-12-10 00:15:07.064592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.207 [2024-12-10 00:15:07.064606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.207 [2024-12-10 00:15:07.064613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.207 [2024-12-10 00:15:07.064619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.207 [2024-12-10 00:15:07.064634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.207 qpair failed and we were unable to recover it. 
00:33:32.207 [2024-12-10 00:15:07.074604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.207 [2024-12-10 00:15:07.074658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.207 [2024-12-10 00:15:07.074671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.207 [2024-12-10 00:15:07.074679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.207 [2024-12-10 00:15:07.074685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.207 [2024-12-10 00:15:07.074700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.207 qpair failed and we were unable to recover it. 00:33:32.207 [2024-12-10 00:15:07.084636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.207 [2024-12-10 00:15:07.084691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.207 [2024-12-10 00:15:07.084705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.207 [2024-12-10 00:15:07.084712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.207 [2024-12-10 00:15:07.084722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.207 [2024-12-10 00:15:07.084737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.207 qpair failed and we were unable to recover it. 00:33:32.207 [2024-12-10 00:15:07.094650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.207 [2024-12-10 00:15:07.094702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.207 [2024-12-10 00:15:07.094716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.207 [2024-12-10 00:15:07.094722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.207 [2024-12-10 00:15:07.094729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.207 [2024-12-10 00:15:07.094744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.207 qpair failed and we were unable to recover it. 
00:33:32.207 [2024-12-10 00:15:07.104686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.207 [2024-12-10 00:15:07.104747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.207 [2024-12-10 00:15:07.104760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.207 [2024-12-10 00:15:07.104768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.207 [2024-12-10 00:15:07.104774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.207 [2024-12-10 00:15:07.104789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.207 qpair failed and we were unable to recover it. 00:33:32.207 [2024-12-10 00:15:07.114731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.207 [2024-12-10 00:15:07.114790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.207 [2024-12-10 00:15:07.114803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.207 [2024-12-10 00:15:07.114810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.207 [2024-12-10 00:15:07.114817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.207 [2024-12-10 00:15:07.114833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.207 qpair failed and we were unable to recover it. 00:33:32.207 [2024-12-10 00:15:07.124775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.207 [2024-12-10 00:15:07.124837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.207 [2024-12-10 00:15:07.124850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.207 [2024-12-10 00:15:07.124857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.207 [2024-12-10 00:15:07.124864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.207 [2024-12-10 00:15:07.124879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.207 qpair failed and we were unable to recover it. 
00:33:32.500 [2024-12-10 00:15:07.134822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.500 [2024-12-10 00:15:07.134882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.500 [2024-12-10 00:15:07.134895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.500 [2024-12-10 00:15:07.134902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.500 [2024-12-10 00:15:07.134908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.500 [2024-12-10 00:15:07.134923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.500 qpair failed and we were unable to recover it. 00:33:32.500 [2024-12-10 00:15:07.144883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.500 [2024-12-10 00:15:07.144994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.500 [2024-12-10 00:15:07.145008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.500 [2024-12-10 00:15:07.145015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.500 [2024-12-10 00:15:07.145021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.500 [2024-12-10 00:15:07.145036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.500 qpair failed and we were unable to recover it. 00:33:32.500 [2024-12-10 00:15:07.154856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.500 [2024-12-10 00:15:07.154915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.500 [2024-12-10 00:15:07.154928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.500 [2024-12-10 00:15:07.154935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.500 [2024-12-10 00:15:07.154941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.500 [2024-12-10 00:15:07.154957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.500 qpair failed and we were unable to recover it. 
00:33:32.500 [2024-12-10 00:15:07.164890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.500 [2024-12-10 00:15:07.164960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.500 [2024-12-10 00:15:07.164974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.500 [2024-12-10 00:15:07.164981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.500 [2024-12-10 00:15:07.164987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.500 [2024-12-10 00:15:07.165002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.500 qpair failed and we were unable to recover it. 00:33:32.500 [2024-12-10 00:15:07.174908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.500 [2024-12-10 00:15:07.174994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.500 [2024-12-10 00:15:07.175012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.500 [2024-12-10 00:15:07.175019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.500 [2024-12-10 00:15:07.175026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.501 [2024-12-10 00:15:07.175040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.501 qpair failed and we were unable to recover it. 00:33:32.501 [2024-12-10 00:15:07.184933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.501 [2024-12-10 00:15:07.184990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.501 [2024-12-10 00:15:07.185003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.501 [2024-12-10 00:15:07.185010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.501 [2024-12-10 00:15:07.185016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.501 [2024-12-10 00:15:07.185031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.501 qpair failed and we were unable to recover it. 
00:33:32.501 [2024-12-10 00:15:07.194995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.501 [2024-12-10 00:15:07.195054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.501 [2024-12-10 00:15:07.195068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.501 [2024-12-10 00:15:07.195075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.501 [2024-12-10 00:15:07.195081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.501 [2024-12-10 00:15:07.195097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.501 qpair failed and we were unable to recover it. 00:33:32.501 [2024-12-10 00:15:07.205035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.501 [2024-12-10 00:15:07.205090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.501 [2024-12-10 00:15:07.205104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.501 [2024-12-10 00:15:07.205111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.501 [2024-12-10 00:15:07.205117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.501 [2024-12-10 00:15:07.205132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.501 qpair failed and we were unable to recover it. 00:33:32.501 [2024-12-10 00:15:07.215011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.501 [2024-12-10 00:15:07.215065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.501 [2024-12-10 00:15:07.215079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.501 [2024-12-10 00:15:07.215089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.501 [2024-12-10 00:15:07.215095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.501 [2024-12-10 00:15:07.215111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.501 qpair failed and we were unable to recover it. 
00:33:32.501 [2024-12-10 00:15:07.225010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.501 [2024-12-10 00:15:07.225068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.501 [2024-12-10 00:15:07.225081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.501 [2024-12-10 00:15:07.225088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.501 [2024-12-10 00:15:07.225094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.501 [2024-12-10 00:15:07.225109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.501 qpair failed and we were unable to recover it. 00:33:32.501 [2024-12-10 00:15:07.235010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.501 [2024-12-10 00:15:07.235068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.501 [2024-12-10 00:15:07.235082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.501 [2024-12-10 00:15:07.235090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.501 [2024-12-10 00:15:07.235096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.501 [2024-12-10 00:15:07.235111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.501 qpair failed and we were unable to recover it. 00:33:32.501 [2024-12-10 00:15:07.245042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.501 [2024-12-10 00:15:07.245098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.501 [2024-12-10 00:15:07.245112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.501 [2024-12-10 00:15:07.245119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.501 [2024-12-10 00:15:07.245126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.501 [2024-12-10 00:15:07.245140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.501 qpair failed and we were unable to recover it. 
00:33:32.501 [2024-12-10 00:15:07.255178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.501 [2024-12-10 00:15:07.255234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.501 [2024-12-10 00:15:07.255248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.501 [2024-12-10 00:15:07.255255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.501 [2024-12-10 00:15:07.255261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.501 [2024-12-10 00:15:07.255280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.501 qpair failed and we were unable to recover it. 00:33:32.501 [2024-12-10 00:15:07.265107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.501 [2024-12-10 00:15:07.265171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.501 [2024-12-10 00:15:07.265189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.501 [2024-12-10 00:15:07.265196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.501 [2024-12-10 00:15:07.265202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.501 [2024-12-10 00:15:07.265217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.501 qpair failed and we were unable to recover it. 00:33:32.501 [2024-12-10 00:15:07.275131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.501 [2024-12-10 00:15:07.275187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.501 [2024-12-10 00:15:07.275201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.501 [2024-12-10 00:15:07.275208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.501 [2024-12-10 00:15:07.275214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.501 [2024-12-10 00:15:07.275230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.501 qpair failed and we were unable to recover it. 
00:33:32.501 [2024-12-10 00:15:07.285144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.501 [2024-12-10 00:15:07.285200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.501 [2024-12-10 00:15:07.285213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.501 [2024-12-10 00:15:07.285221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.501 [2024-12-10 00:15:07.285227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.501 [2024-12-10 00:15:07.285243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.501 qpair failed and we were unable to recover it. 00:33:32.501 [2024-12-10 00:15:07.295248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.501 [2024-12-10 00:15:07.295298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.501 [2024-12-10 00:15:07.295311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.501 [2024-12-10 00:15:07.295318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.501 [2024-12-10 00:15:07.295324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.501 [2024-12-10 00:15:07.295339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.501 qpair failed and we were unable to recover it. 00:33:32.501 [2024-12-10 00:15:07.305231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.501 [2024-12-10 00:15:07.305293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.501 [2024-12-10 00:15:07.305307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.501 [2024-12-10 00:15:07.305316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.501 [2024-12-10 00:15:07.305322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.501 [2024-12-10 00:15:07.305338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.501 qpair failed and we were unable to recover it. 
00:33:32.501 [2024-12-10 00:15:07.315305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.502 [2024-12-10 00:15:07.315361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.502 [2024-12-10 00:15:07.315374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.502 [2024-12-10 00:15:07.315381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.502 [2024-12-10 00:15:07.315387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.502 [2024-12-10 00:15:07.315402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.502 qpair failed and we were unable to recover it. 00:33:32.502 [2024-12-10 00:15:07.325363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.502 [2024-12-10 00:15:07.325415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.502 [2024-12-10 00:15:07.325429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.502 [2024-12-10 00:15:07.325436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.502 [2024-12-10 00:15:07.325442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.502 [2024-12-10 00:15:07.325457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.502 qpair failed and we were unable to recover it. 00:33:32.502 [2024-12-10 00:15:07.335373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.502 [2024-12-10 00:15:07.335425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.502 [2024-12-10 00:15:07.335439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.502 [2024-12-10 00:15:07.335446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.502 [2024-12-10 00:15:07.335452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.502 [2024-12-10 00:15:07.335468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.502 qpair failed and we were unable to recover it. 
00:33:32.502 [2024-12-10 00:15:07.345413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.502 [2024-12-10 00:15:07.345512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.502 [2024-12-10 00:15:07.345525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.502 [2024-12-10 00:15:07.345535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.502 [2024-12-10 00:15:07.345542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.502 [2024-12-10 00:15:07.345557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.502 qpair failed and we were unable to recover it. 00:33:32.502 [2024-12-10 00:15:07.355410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.502 [2024-12-10 00:15:07.355508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.502 [2024-12-10 00:15:07.355522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.502 [2024-12-10 00:15:07.355529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.502 [2024-12-10 00:15:07.355535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.502 [2024-12-10 00:15:07.355550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.502 qpair failed and we were unable to recover it. 00:33:32.502 [2024-12-10 00:15:07.365428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.502 [2024-12-10 00:15:07.365482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.502 [2024-12-10 00:15:07.365496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.502 [2024-12-10 00:15:07.365502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.502 [2024-12-10 00:15:07.365509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.502 [2024-12-10 00:15:07.365524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.502 qpair failed and we were unable to recover it. 
00:33:32.502 [2024-12-10 00:15:07.375458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.502 [2024-12-10 00:15:07.375515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.502 [2024-12-10 00:15:07.375528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.502 [2024-12-10 00:15:07.375535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.502 [2024-12-10 00:15:07.375541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.502 [2024-12-10 00:15:07.375556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.502 qpair failed and we were unable to recover it. 00:33:32.502 [2024-12-10 00:15:07.385467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.502 [2024-12-10 00:15:07.385523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.502 [2024-12-10 00:15:07.385537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.502 [2024-12-10 00:15:07.385543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.502 [2024-12-10 00:15:07.385550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.502 [2024-12-10 00:15:07.385568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.502 qpair failed and we were unable to recover it. 00:33:32.502 [2024-12-10 00:15:07.395555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.502 [2024-12-10 00:15:07.395610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.502 [2024-12-10 00:15:07.395623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.502 [2024-12-10 00:15:07.395630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.502 [2024-12-10 00:15:07.395636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.502 [2024-12-10 00:15:07.395652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.502 qpair failed and we were unable to recover it. 
00:33:32.502 [2024-12-10 00:15:07.405573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.502 [2024-12-10 00:15:07.405646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.502 [2024-12-10 00:15:07.405660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.502 [2024-12-10 00:15:07.405667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.502 [2024-12-10 00:15:07.405673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.502 [2024-12-10 00:15:07.405687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.502 qpair failed and we were unable to recover it. 00:33:32.502 [2024-12-10 00:15:07.415590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.502 [2024-12-10 00:15:07.415644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.502 [2024-12-10 00:15:07.415657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.502 [2024-12-10 00:15:07.415664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.502 [2024-12-10 00:15:07.415672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.502 [2024-12-10 00:15:07.415687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.502 qpair failed and we were unable to recover it. 00:33:32.502 [2024-12-10 00:15:07.425575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.502 [2024-12-10 00:15:07.425634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.502 [2024-12-10 00:15:07.425647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.502 [2024-12-10 00:15:07.425654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.502 [2024-12-10 00:15:07.425660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.502 [2024-12-10 00:15:07.425675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.502 qpair failed and we were unable to recover it. 
00:33:32.784 [2024-12-10 00:15:07.435607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.784 [2024-12-10 00:15:07.435667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.784 [2024-12-10 00:15:07.435680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.784 [2024-12-10 00:15:07.435687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.784 [2024-12-10 00:15:07.435693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.784 [2024-12-10 00:15:07.435708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.784 qpair failed and we were unable to recover it. 00:33:32.784 [2024-12-10 00:15:07.445637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.784 [2024-12-10 00:15:07.445703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.784 [2024-12-10 00:15:07.445717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.784 [2024-12-10 00:15:07.445724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.784 [2024-12-10 00:15:07.445731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.784 [2024-12-10 00:15:07.445746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.784 qpair failed and we were unable to recover it. 00:33:32.784 [2024-12-10 00:15:07.455705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.784 [2024-12-10 00:15:07.455761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.784 [2024-12-10 00:15:07.455775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.784 [2024-12-10 00:15:07.455782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.784 [2024-12-10 00:15:07.455789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.784 [2024-12-10 00:15:07.455805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.784 qpair failed and we were unable to recover it. 
00:33:32.784 [2024-12-10 00:15:07.465676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.784 [2024-12-10 00:15:07.465733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.784 [2024-12-10 00:15:07.465746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.784 [2024-12-10 00:15:07.465753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.784 [2024-12-10 00:15:07.465759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.784 [2024-12-10 00:15:07.465775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.784 qpair failed and we were unable to recover it. 00:33:32.784 [2024-12-10 00:15:07.475725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.784 [2024-12-10 00:15:07.475778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.784 [2024-12-10 00:15:07.475794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.784 [2024-12-10 00:15:07.475802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.784 [2024-12-10 00:15:07.475808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.784 [2024-12-10 00:15:07.475823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.784 qpair failed and we were unable to recover it. 00:33:32.784 [2024-12-10 00:15:07.485840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.784 [2024-12-10 00:15:07.485891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.784 [2024-12-10 00:15:07.485905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.784 [2024-12-10 00:15:07.485911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.784 [2024-12-10 00:15:07.485919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.784 [2024-12-10 00:15:07.485934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.784 qpair failed and we were unable to recover it. 
00:33:32.784 [2024-12-10 00:15:07.495826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.784 [2024-12-10 00:15:07.495879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.785 [2024-12-10 00:15:07.495892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.785 [2024-12-10 00:15:07.495899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.785 [2024-12-10 00:15:07.495905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.785 [2024-12-10 00:15:07.495920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.785 qpair failed and we were unable to recover it. 00:33:32.785 [2024-12-10 00:15:07.505789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.785 [2024-12-10 00:15:07.505847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.785 [2024-12-10 00:15:07.505859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.785 [2024-12-10 00:15:07.505867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.785 [2024-12-10 00:15:07.505873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.785 [2024-12-10 00:15:07.505888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.785 qpair failed and we were unable to recover it. 00:33:32.785 [2024-12-10 00:15:07.515829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.785 [2024-12-10 00:15:07.515889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.785 [2024-12-10 00:15:07.515903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.785 [2024-12-10 00:15:07.515911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.785 [2024-12-10 00:15:07.515923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.785 [2024-12-10 00:15:07.515938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.785 qpair failed and we were unable to recover it. 
00:33:32.785 [2024-12-10 00:15:07.525851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.785 [2024-12-10 00:15:07.525906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.785 [2024-12-10 00:15:07.525920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.785 [2024-12-10 00:15:07.525927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.785 [2024-12-10 00:15:07.525934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.785 [2024-12-10 00:15:07.525948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.785 qpair failed and we were unable to recover it. 00:33:32.785 [2024-12-10 00:15:07.535940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.785 [2024-12-10 00:15:07.535992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.785 [2024-12-10 00:15:07.536006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.785 [2024-12-10 00:15:07.536012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.785 [2024-12-10 00:15:07.536019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.785 [2024-12-10 00:15:07.536035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.785 qpair failed and we were unable to recover it. 00:33:32.785 [2024-12-10 00:15:07.545994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.785 [2024-12-10 00:15:07.546054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.785 [2024-12-10 00:15:07.546068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.785 [2024-12-10 00:15:07.546075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.785 [2024-12-10 00:15:07.546082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.785 [2024-12-10 00:15:07.546097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.785 qpair failed and we were unable to recover it. 
00:33:32.785 [2024-12-10 00:15:07.556052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.785 [2024-12-10 00:15:07.556166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.785 [2024-12-10 00:15:07.556180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.785 [2024-12-10 00:15:07.556187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.785 [2024-12-10 00:15:07.556194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.785 [2024-12-10 00:15:07.556209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.785 qpair failed and we were unable to recover it. 00:33:32.785 [2024-12-10 00:15:07.565986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.785 [2024-12-10 00:15:07.566042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.785 [2024-12-10 00:15:07.566056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.785 [2024-12-10 00:15:07.566063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.785 [2024-12-10 00:15:07.566069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.785 [2024-12-10 00:15:07.566085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.785 qpair failed and we were unable to recover it. 00:33:32.785 [2024-12-10 00:15:07.576056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.785 [2024-12-10 00:15:07.576111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.785 [2024-12-10 00:15:07.576125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.785 [2024-12-10 00:15:07.576132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.785 [2024-12-10 00:15:07.576139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.785 [2024-12-10 00:15:07.576154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.785 qpair failed and we were unable to recover it. 
00:33:32.785 [2024-12-10 00:15:07.586088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.785 [2024-12-10 00:15:07.586143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.785 [2024-12-10 00:15:07.586161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.785 [2024-12-10 00:15:07.586169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.785 [2024-12-10 00:15:07.586175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.785 [2024-12-10 00:15:07.586190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.785 qpair failed and we were unable to recover it. 00:33:32.785 [2024-12-10 00:15:07.596115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.785 [2024-12-10 00:15:07.596176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.785 [2024-12-10 00:15:07.596190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.785 [2024-12-10 00:15:07.596197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.785 [2024-12-10 00:15:07.596203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.785 [2024-12-10 00:15:07.596218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.785 qpair failed and we were unable to recover it. 00:33:32.785 [2024-12-10 00:15:07.606184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.785 [2024-12-10 00:15:07.606250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.785 [2024-12-10 00:15:07.606267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.785 [2024-12-10 00:15:07.606275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.785 [2024-12-10 00:15:07.606281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.785 [2024-12-10 00:15:07.606297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.785 qpair failed and we were unable to recover it. 
00:33:32.785 [2024-12-10 00:15:07.616106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.785 [2024-12-10 00:15:07.616166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.785 [2024-12-10 00:15:07.616180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.785 [2024-12-10 00:15:07.616188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.785 [2024-12-10 00:15:07.616194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.785 [2024-12-10 00:15:07.616209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.785 qpair failed and we were unable to recover it. 00:33:32.785 [2024-12-10 00:15:07.626201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.785 [2024-12-10 00:15:07.626255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.785 [2024-12-10 00:15:07.626268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.785 [2024-12-10 00:15:07.626275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.785 [2024-12-10 00:15:07.626282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.786 [2024-12-10 00:15:07.626298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.786 qpair failed and we were unable to recover it. 00:33:32.786 [2024-12-10 00:15:07.636162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.786 [2024-12-10 00:15:07.636221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.786 [2024-12-10 00:15:07.636235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.786 [2024-12-10 00:15:07.636241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.786 [2024-12-10 00:15:07.636248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.786 [2024-12-10 00:15:07.636264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.786 qpair failed and we were unable to recover it. 
00:33:32.786 [2024-12-10 00:15:07.646262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.786 [2024-12-10 00:15:07.646318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.786 [2024-12-10 00:15:07.646331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.786 [2024-12-10 00:15:07.646338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.786 [2024-12-10 00:15:07.646347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.786 [2024-12-10 00:15:07.646362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.786 qpair failed and we were unable to recover it. 00:33:32.786 [2024-12-10 00:15:07.656245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.786 [2024-12-10 00:15:07.656306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.786 [2024-12-10 00:15:07.656319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.786 [2024-12-10 00:15:07.656327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.786 [2024-12-10 00:15:07.656333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.786 [2024-12-10 00:15:07.656348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.786 qpair failed and we were unable to recover it. 00:33:32.786 [2024-12-10 00:15:07.666323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.786 [2024-12-10 00:15:07.666378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.786 [2024-12-10 00:15:07.666391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.786 [2024-12-10 00:15:07.666399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.786 [2024-12-10 00:15:07.666405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.786 [2024-12-10 00:15:07.666420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.786 qpair failed and we were unable to recover it. 
00:33:32.786 [2024-12-10 00:15:07.676393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.786 [2024-12-10 00:15:07.676500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.786 [2024-12-10 00:15:07.676513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.786 [2024-12-10 00:15:07.676520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.786 [2024-12-10 00:15:07.676526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.786 [2024-12-10 00:15:07.676541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.786 qpair failed and we were unable to recover it. 00:33:32.786 [2024-12-10 00:15:07.686363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.786 [2024-12-10 00:15:07.686420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.786 [2024-12-10 00:15:07.686433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.786 [2024-12-10 00:15:07.686439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.786 [2024-12-10 00:15:07.686446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.786 [2024-12-10 00:15:07.686463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.786 qpair failed and we were unable to recover it. 00:33:32.786 [2024-12-10 00:15:07.696373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.786 [2024-12-10 00:15:07.696427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.786 [2024-12-10 00:15:07.696441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.786 [2024-12-10 00:15:07.696448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.786 [2024-12-10 00:15:07.696454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.786 [2024-12-10 00:15:07.696470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.786 qpair failed and we were unable to recover it. 
00:33:32.786 [2024-12-10 00:15:07.706494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:32.786 [2024-12-10 00:15:07.706556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:32.786 [2024-12-10 00:15:07.706570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:32.786 [2024-12-10 00:15:07.706578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:32.786 [2024-12-10 00:15:07.706584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:32.786 [2024-12-10 00:15:07.706600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:32.786 qpair failed and we were unable to recover it. 00:33:33.065 [2024-12-10 00:15:07.716478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.065 [2024-12-10 00:15:07.716542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.065 [2024-12-10 00:15:07.716559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.065 [2024-12-10 00:15:07.716567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.065 [2024-12-10 00:15:07.716574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.066 [2024-12-10 00:15:07.716590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.066 qpair failed and we were unable to recover it. 00:33:33.066 [2024-12-10 00:15:07.726501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.066 [2024-12-10 00:15:07.726561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.066 [2024-12-10 00:15:07.726576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.066 [2024-12-10 00:15:07.726584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.066 [2024-12-10 00:15:07.726591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.066 [2024-12-10 00:15:07.726606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.066 qpair failed and we were unable to recover it. 
00:33:33.066 [2024-12-10 00:15:07.736436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.066 [2024-12-10 00:15:07.736492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.066 [2024-12-10 00:15:07.736510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.066 [2024-12-10 00:15:07.736518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.066 [2024-12-10 00:15:07.736525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.066 [2024-12-10 00:15:07.736541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.066 qpair failed and we were unable to recover it. 00:33:33.066 [2024-12-10 00:15:07.746560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.066 [2024-12-10 00:15:07.746620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.066 [2024-12-10 00:15:07.746635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.066 [2024-12-10 00:15:07.746644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.066 [2024-12-10 00:15:07.746651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.066 [2024-12-10 00:15:07.746666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.066 qpair failed and we were unable to recover it. 00:33:33.066 [2024-12-10 00:15:07.756621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.066 [2024-12-10 00:15:07.756727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.066 [2024-12-10 00:15:07.756742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.066 [2024-12-10 00:15:07.756750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.066 [2024-12-10 00:15:07.756756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.066 [2024-12-10 00:15:07.756773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.066 qpair failed and we were unable to recover it. 
00:33:33.066 [2024-12-10 00:15:07.766520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.066 [2024-12-10 00:15:07.766577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.066 [2024-12-10 00:15:07.766592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.066 [2024-12-10 00:15:07.766599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.066 [2024-12-10 00:15:07.766606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.066 [2024-12-10 00:15:07.766622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.066 qpair failed and we were unable to recover it. 00:33:33.066 [2024-12-10 00:15:07.776609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.066 [2024-12-10 00:15:07.776660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.066 [2024-12-10 00:15:07.776675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.066 [2024-12-10 00:15:07.776685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.066 [2024-12-10 00:15:07.776691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.066 [2024-12-10 00:15:07.776707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.066 qpair failed and we were unable to recover it. 00:33:33.066 [2024-12-10 00:15:07.786600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.066 [2024-12-10 00:15:07.786657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.066 [2024-12-10 00:15:07.786671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.066 [2024-12-10 00:15:07.786679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.066 [2024-12-10 00:15:07.786686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.066 [2024-12-10 00:15:07.786702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.066 qpair failed and we were unable to recover it. 
00:33:33.066 [2024-12-10 00:15:07.796694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.066 [2024-12-10 00:15:07.796752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.066 [2024-12-10 00:15:07.796767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.066 [2024-12-10 00:15:07.796774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.066 [2024-12-10 00:15:07.796781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.066 [2024-12-10 00:15:07.796796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.066 qpair failed and we were unable to recover it. 00:33:33.066 [2024-12-10 00:15:07.806710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.066 [2024-12-10 00:15:07.806767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.066 [2024-12-10 00:15:07.806781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.066 [2024-12-10 00:15:07.806789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.066 [2024-12-10 00:15:07.806796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.066 [2024-12-10 00:15:07.806812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.066 qpair failed and we were unable to recover it. 00:33:33.066 [2024-12-10 00:15:07.816748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.066 [2024-12-10 00:15:07.816803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.066 [2024-12-10 00:15:07.816817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.066 [2024-12-10 00:15:07.816824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.066 [2024-12-10 00:15:07.816830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.066 [2024-12-10 00:15:07.816849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.066 qpair failed and we were unable to recover it. 
00:33:33.066 [2024-12-10 00:15:07.826706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.066 [2024-12-10 00:15:07.826763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.066 [2024-12-10 00:15:07.826776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.066 [2024-12-10 00:15:07.826783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.066 [2024-12-10 00:15:07.826790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.066 [2024-12-10 00:15:07.826806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.066 qpair failed and we were unable to recover it. 00:33:33.066 [2024-12-10 00:15:07.836804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.066 [2024-12-10 00:15:07.836859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.066 [2024-12-10 00:15:07.836874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.066 [2024-12-10 00:15:07.836881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.066 [2024-12-10 00:15:07.836888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.066 [2024-12-10 00:15:07.836903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.066 qpair failed and we were unable to recover it. 00:33:33.066 [2024-12-10 00:15:07.846844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.066 [2024-12-10 00:15:07.846917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.066 [2024-12-10 00:15:07.846931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.066 [2024-12-10 00:15:07.846938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.066 [2024-12-10 00:15:07.846944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.066 [2024-12-10 00:15:07.846959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.067 qpair failed and we were unable to recover it. 
00:33:33.067 [2024-12-10 00:15:07.856866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.067 [2024-12-10 00:15:07.856918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.067 [2024-12-10 00:15:07.856932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.067 [2024-12-10 00:15:07.856939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.067 [2024-12-10 00:15:07.856945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.067 [2024-12-10 00:15:07.856962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.067 qpair failed and we were unable to recover it. 00:33:33.067 [2024-12-10 00:15:07.866945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.067 [2024-12-10 00:15:07.867007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.067 [2024-12-10 00:15:07.867022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.067 [2024-12-10 00:15:07.867029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.067 [2024-12-10 00:15:07.867035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.067 [2024-12-10 00:15:07.867050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.067 qpair failed and we were unable to recover it. 00:33:33.067 [2024-12-10 00:15:07.876954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.067 [2024-12-10 00:15:07.877006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.067 [2024-12-10 00:15:07.877021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.067 [2024-12-10 00:15:07.877028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.067 [2024-12-10 00:15:07.877035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.067 [2024-12-10 00:15:07.877050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.067 qpair failed and we were unable to recover it. 
00:33:33.067 [2024-12-10 00:15:07.886955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.067 [2024-12-10 00:15:07.887011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.067 [2024-12-10 00:15:07.887025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.067 [2024-12-10 00:15:07.887032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.067 [2024-12-10 00:15:07.887038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.067 [2024-12-10 00:15:07.887054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.067 qpair failed and we were unable to recover it. 00:33:33.067 [2024-12-10 00:15:07.896997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.067 [2024-12-10 00:15:07.897052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.067 [2024-12-10 00:15:07.897066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.067 [2024-12-10 00:15:07.897073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.067 [2024-12-10 00:15:07.897079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.067 [2024-12-10 00:15:07.897095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.067 qpair failed and we were unable to recover it. 00:33:33.067 [2024-12-10 00:15:07.907054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.067 [2024-12-10 00:15:07.907109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.067 [2024-12-10 00:15:07.907123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.067 [2024-12-10 00:15:07.907133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.067 [2024-12-10 00:15:07.907139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.067 [2024-12-10 00:15:07.907155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.067 qpair failed and we were unable to recover it. 
00:33:33.067 [2024-12-10 00:15:07.917057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.067 [2024-12-10 00:15:07.917114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.067 [2024-12-10 00:15:07.917127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.067 [2024-12-10 00:15:07.917135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.067 [2024-12-10 00:15:07.917141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.067 [2024-12-10 00:15:07.917156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.067 qpair failed and we were unable to recover it. 00:33:33.067 [2024-12-10 00:15:07.927077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.067 [2024-12-10 00:15:07.927134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.067 [2024-12-10 00:15:07.927149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.067 [2024-12-10 00:15:07.927161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.067 [2024-12-10 00:15:07.927167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.067 [2024-12-10 00:15:07.927183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.067 qpair failed and we were unable to recover it. 00:33:33.067 [2024-12-10 00:15:07.937114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.067 [2024-12-10 00:15:07.937175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.067 [2024-12-10 00:15:07.937189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.067 [2024-12-10 00:15:07.937197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.067 [2024-12-10 00:15:07.937204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.067 [2024-12-10 00:15:07.937220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.067 qpair failed and we were unable to recover it. 
00:33:33.067 [2024-12-10 00:15:07.947181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.067 [2024-12-10 00:15:07.947255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.067 [2024-12-10 00:15:07.947270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.067 [2024-12-10 00:15:07.947277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.067 [2024-12-10 00:15:07.947283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.067 [2024-12-10 00:15:07.947303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.067 qpair failed and we were unable to recover it. 00:33:33.067 [2024-12-10 00:15:07.957096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.067 [2024-12-10 00:15:07.957175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.067 [2024-12-10 00:15:07.957190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.067 [2024-12-10 00:15:07.957198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.067 [2024-12-10 00:15:07.957205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.067 [2024-12-10 00:15:07.957221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.067 qpair failed and we were unable to recover it. 00:33:33.067 [2024-12-10 00:15:07.967197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.067 [2024-12-10 00:15:07.967254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.067 [2024-12-10 00:15:07.967268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.067 [2024-12-10 00:15:07.967276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.067 [2024-12-10 00:15:07.967283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.067 [2024-12-10 00:15:07.967298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.067 qpair failed and we were unable to recover it. 
00:33:33.067 [2024-12-10 00:15:07.977213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.067 [2024-12-10 00:15:07.977315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.067 [2024-12-10 00:15:07.977329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.067 [2024-12-10 00:15:07.977337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.067 [2024-12-10 00:15:07.977343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.067 [2024-12-10 00:15:07.977358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.067 qpair failed and we were unable to recover it. 00:33:33.067 [2024-12-10 00:15:07.987235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.068 [2024-12-10 00:15:07.987290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.068 [2024-12-10 00:15:07.987305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.068 [2024-12-10 00:15:07.987312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.068 [2024-12-10 00:15:07.987319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.068 [2024-12-10 00:15:07.987334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.068 qpair failed and we were unable to recover it. 00:33:33.354 [2024-12-10 00:15:07.997312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.354 [2024-12-10 00:15:07.997378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.354 [2024-12-10 00:15:07.997392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.354 [2024-12-10 00:15:07.997399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.354 [2024-12-10 00:15:07.997406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.354 [2024-12-10 00:15:07.997421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.354 qpair failed and we were unable to recover it. 
00:33:33.354 [2024-12-10 00:15:08.007349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.354 [2024-12-10 00:15:08.007422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.354 [2024-12-10 00:15:08.007438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.354 [2024-12-10 00:15:08.007445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.354 [2024-12-10 00:15:08.007451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.354 [2024-12-10 00:15:08.007466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.354 qpair failed and we were unable to recover it. 00:33:33.354 [2024-12-10 00:15:08.017343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.354 [2024-12-10 00:15:08.017397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.354 [2024-12-10 00:15:08.017411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.354 [2024-12-10 00:15:08.017418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.354 [2024-12-10 00:15:08.017425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.354 [2024-12-10 00:15:08.017440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.354 qpair failed and we were unable to recover it. 00:33:33.354 [2024-12-10 00:15:08.027368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.354 [2024-12-10 00:15:08.027428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.354 [2024-12-10 00:15:08.027442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.354 [2024-12-10 00:15:08.027450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.354 [2024-12-10 00:15:08.027456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.354 [2024-12-10 00:15:08.027471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.354 qpair failed and we were unable to recover it. 
00:33:33.354 [2024-12-10 00:15:08.037411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.354 [2024-12-10 00:15:08.037480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.354 [2024-12-10 00:15:08.037498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.354 [2024-12-10 00:15:08.037506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.354 [2024-12-10 00:15:08.037512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.354 [2024-12-10 00:15:08.037528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.354 qpair failed and we were unable to recover it. 00:33:33.354 [2024-12-10 00:15:08.047432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.354 [2024-12-10 00:15:08.047497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.354 [2024-12-10 00:15:08.047511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.354 [2024-12-10 00:15:08.047519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.354 [2024-12-10 00:15:08.047525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.354 [2024-12-10 00:15:08.047541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.354 qpair failed and we were unable to recover it. 00:33:33.354 [2024-12-10 00:15:08.057460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.354 [2024-12-10 00:15:08.057515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.354 [2024-12-10 00:15:08.057530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.355 [2024-12-10 00:15:08.057538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.355 [2024-12-10 00:15:08.057544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.355 [2024-12-10 00:15:08.057560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.355 qpair failed and we were unable to recover it. 
00:33:33.355 [2024-12-10 00:15:08.067494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.355 [2024-12-10 00:15:08.067562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.355 [2024-12-10 00:15:08.067577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.355 [2024-12-10 00:15:08.067585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.355 [2024-12-10 00:15:08.067591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.355 [2024-12-10 00:15:08.067607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-12-10 00:15:08.077511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.355 [2024-12-10 00:15:08.077569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.355 [2024-12-10 00:15:08.077583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.355 [2024-12-10 00:15:08.077590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.355 [2024-12-10 00:15:08.077600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.355 [2024-12-10 00:15:08.077615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-12-10 00:15:08.087468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.355 [2024-12-10 00:15:08.087535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.355 [2024-12-10 00:15:08.087549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.355 [2024-12-10 00:15:08.087556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.355 [2024-12-10 00:15:08.087563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.355 [2024-12-10 00:15:08.087578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.355 qpair failed and we were unable to recover it. 
00:33:33.355 [2024-12-10 00:15:08.097603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.355 [2024-12-10 00:15:08.097654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.355 [2024-12-10 00:15:08.097668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.355 [2024-12-10 00:15:08.097676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.355 [2024-12-10 00:15:08.097682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.355 [2024-12-10 00:15:08.097697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-12-10 00:15:08.107599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.355 [2024-12-10 00:15:08.107657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.355 [2024-12-10 00:15:08.107671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.355 [2024-12-10 00:15:08.107678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.355 [2024-12-10 00:15:08.107684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.355 [2024-12-10 00:15:08.107699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-12-10 00:15:08.117621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.355 [2024-12-10 00:15:08.117676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.355 [2024-12-10 00:15:08.117690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.355 [2024-12-10 00:15:08.117697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.355 [2024-12-10 00:15:08.117703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.355 [2024-12-10 00:15:08.117719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.355 qpair failed and we were unable to recover it. 
00:33:33.355 [2024-12-10 00:15:08.127656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.355 [2024-12-10 00:15:08.127709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.355 [2024-12-10 00:15:08.127724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.355 [2024-12-10 00:15:08.127730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.355 [2024-12-10 00:15:08.127737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.355 [2024-12-10 00:15:08.127753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-12-10 00:15:08.137681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.355 [2024-12-10 00:15:08.137738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.355 [2024-12-10 00:15:08.137752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.355 [2024-12-10 00:15:08.137760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.355 [2024-12-10 00:15:08.137766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.355 [2024-12-10 00:15:08.137781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-12-10 00:15:08.147720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.355 [2024-12-10 00:15:08.147776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.355 [2024-12-10 00:15:08.147790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.355 [2024-12-10 00:15:08.147797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.355 [2024-12-10 00:15:08.147803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.355 [2024-12-10 00:15:08.147818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.355 qpair failed and we were unable to recover it. 
00:33:33.355 [2024-12-10 00:15:08.157746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.355 [2024-12-10 00:15:08.157805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.355 [2024-12-10 00:15:08.157820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.355 [2024-12-10 00:15:08.157828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.355 [2024-12-10 00:15:08.157834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.355 [2024-12-10 00:15:08.157850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-12-10 00:15:08.167777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.355 [2024-12-10 00:15:08.167833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.355 [2024-12-10 00:15:08.167850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.355 [2024-12-10 00:15:08.167857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.355 [2024-12-10 00:15:08.167864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.355 [2024-12-10 00:15:08.167879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.355 qpair failed and we were unable to recover it. 00:33:33.355 [2024-12-10 00:15:08.177797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.355 [2024-12-10 00:15:08.177849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.355 [2024-12-10 00:15:08.177863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.355 [2024-12-10 00:15:08.177870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.355 [2024-12-10 00:15:08.177877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.355 [2024-12-10 00:15:08.177892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.355 qpair failed and we were unable to recover it. 
00:33:33.355 [2024-12-10 00:15:08.187831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.355 [2024-12-10 00:15:08.187885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.355 [2024-12-10 00:15:08.187899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.355 [2024-12-10 00:15:08.187906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.355 [2024-12-10 00:15:08.187912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.356 [2024-12-10 00:15:08.187927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-12-10 00:15:08.197862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.356 [2024-12-10 00:15:08.197919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.356 [2024-12-10 00:15:08.197933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.356 [2024-12-10 00:15:08.197941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.356 [2024-12-10 00:15:08.197947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.356 [2024-12-10 00:15:08.197962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-12-10 00:15:08.207869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.356 [2024-12-10 00:15:08.207932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.356 [2024-12-10 00:15:08.207947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.356 [2024-12-10 00:15:08.207955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.356 [2024-12-10 00:15:08.207966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.356 [2024-12-10 00:15:08.207981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.356 qpair failed and we were unable to recover it. 
00:33:33.356 [2024-12-10 00:15:08.217917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.356 [2024-12-10 00:15:08.217974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.356 [2024-12-10 00:15:08.217989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.356 [2024-12-10 00:15:08.217996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.356 [2024-12-10 00:15:08.218003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.356 [2024-12-10 00:15:08.218018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-12-10 00:15:08.227946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.356 [2024-12-10 00:15:08.228016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.356 [2024-12-10 00:15:08.228032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.356 [2024-12-10 00:15:08.228040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.356 [2024-12-10 00:15:08.228046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.356 [2024-12-10 00:15:08.228061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-12-10 00:15:08.237974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.356 [2024-12-10 00:15:08.238035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.356 [2024-12-10 00:15:08.238050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.356 [2024-12-10 00:15:08.238057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.356 [2024-12-10 00:15:08.238064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.356 [2024-12-10 00:15:08.238079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.356 qpair failed and we were unable to recover it. 
00:33:33.356 [2024-12-10 00:15:08.248007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.356 [2024-12-10 00:15:08.248063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.356 [2024-12-10 00:15:08.248078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.356 [2024-12-10 00:15:08.248085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.356 [2024-12-10 00:15:08.248092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.356 [2024-12-10 00:15:08.248107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-12-10 00:15:08.258023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.356 [2024-12-10 00:15:08.258082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.356 [2024-12-10 00:15:08.258099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.356 [2024-12-10 00:15:08.258108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.356 [2024-12-10 00:15:08.258115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.356 [2024-12-10 00:15:08.258131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.356 [2024-12-10 00:15:08.268057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.356 [2024-12-10 00:15:08.268122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.356 [2024-12-10 00:15:08.268138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.356 [2024-12-10 00:15:08.268145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.356 [2024-12-10 00:15:08.268151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.356 [2024-12-10 00:15:08.268173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.356 qpair failed and we were unable to recover it. 
00:33:33.356 [2024-12-10 00:15:08.278092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.356 [2024-12-10 00:15:08.278154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.356 [2024-12-10 00:15:08.278177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.356 [2024-12-10 00:15:08.278186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.356 [2024-12-10 00:15:08.278193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.356 [2024-12-10 00:15:08.278209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.356 qpair failed and we were unable to recover it. 00:33:33.635 [2024-12-10 00:15:08.288119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.635 [2024-12-10 00:15:08.288181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.635 [2024-12-10 00:15:08.288195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.635 [2024-12-10 00:15:08.288203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.635 [2024-12-10 00:15:08.288209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.635 [2024-12-10 00:15:08.288225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.635 qpair failed and we were unable to recover it. 00:33:33.635 [2024-12-10 00:15:08.298161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.635 [2024-12-10 00:15:08.298238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.635 [2024-12-10 00:15:08.298257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.635 [2024-12-10 00:15:08.298265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.635 [2024-12-10 00:15:08.298271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.635 [2024-12-10 00:15:08.298287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.635 qpair failed and we were unable to recover it. 
00:33:33.635 [2024-12-10 00:15:08.308187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.635 [2024-12-10 00:15:08.308247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.635 [2024-12-10 00:15:08.308261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.635 [2024-12-10 00:15:08.308268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.635 [2024-12-10 00:15:08.308275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.635 [2024-12-10 00:15:08.308290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.635 qpair failed and we were unable to recover it. 00:33:33.635 [2024-12-10 00:15:08.318218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.635 [2024-12-10 00:15:08.318277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.635 [2024-12-10 00:15:08.318292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.635 [2024-12-10 00:15:08.318299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.635 [2024-12-10 00:15:08.318305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.635 [2024-12-10 00:15:08.318320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.635 qpair failed and we were unable to recover it. 00:33:33.635 [2024-12-10 00:15:08.328230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.635 [2024-12-10 00:15:08.328287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.635 [2024-12-10 00:15:08.328301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.635 [2024-12-10 00:15:08.328308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.635 [2024-12-10 00:15:08.328316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.635 [2024-12-10 00:15:08.328332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.635 qpair failed and we were unable to recover it. 
00:33:33.635 [2024-12-10 00:15:08.338249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.635 [2024-12-10 00:15:08.338335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.635 [2024-12-10 00:15:08.338350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.635 [2024-12-10 00:15:08.338359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.635 [2024-12-10 00:15:08.338366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.635 [2024-12-10 00:15:08.338381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.635 qpair failed and we were unable to recover it. 00:33:33.635 [2024-12-10 00:15:08.348337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.635 [2024-12-10 00:15:08.348397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.635 [2024-12-10 00:15:08.348411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.636 [2024-12-10 00:15:08.348419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.636 [2024-12-10 00:15:08.348426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.636 [2024-12-10 00:15:08.348441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.636 qpair failed and we were unable to recover it. 00:33:33.636 [2024-12-10 00:15:08.358349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.636 [2024-12-10 00:15:08.358407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.636 [2024-12-10 00:15:08.358421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.636 [2024-12-10 00:15:08.358428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.636 [2024-12-10 00:15:08.358435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.636 [2024-12-10 00:15:08.358451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.636 qpair failed and we were unable to recover it. 
00:33:33.636 [2024-12-10 00:15:08.368345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.636 [2024-12-10 00:15:08.368394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.636 [2024-12-10 00:15:08.368408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.636 [2024-12-10 00:15:08.368415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.636 [2024-12-10 00:15:08.368422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.636 [2024-12-10 00:15:08.368437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.636 qpair failed and we were unable to recover it. 00:33:33.636 [2024-12-10 00:15:08.378372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.636 [2024-12-10 00:15:08.378426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.636 [2024-12-10 00:15:08.378440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.636 [2024-12-10 00:15:08.378446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.636 [2024-12-10 00:15:08.378453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.636 [2024-12-10 00:15:08.378472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.636 qpair failed and we were unable to recover it. 00:33:33.636 [2024-12-10 00:15:08.388409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:33.636 [2024-12-10 00:15:08.388463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:33.636 [2024-12-10 00:15:08.388477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:33.636 [2024-12-10 00:15:08.388484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:33.636 [2024-12-10 00:15:08.388490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:33.636 [2024-12-10 00:15:08.388506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:33.636 qpair failed and we were unable to recover it. 
[... the same seven-line NVMe-oF Fabric CONNECT failure sequence repeats, differing only in its timestamps (2024-12-10 00:15:08.398438 through 00:15:09.020292), with each attempt ending in "qpair failed and we were unable to recover it." ...]
00:33:34.208 [2024-12-10 00:15:09.030244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.208 [2024-12-10 00:15:09.030305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.208 [2024-12-10 00:15:09.030319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.208 [2024-12-10 00:15:09.030330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.208 [2024-12-10 00:15:09.030337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.208 [2024-12-10 00:15:09.030353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.208 qpair failed and we were unable to recover it. 00:33:34.208 [2024-12-10 00:15:09.040304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.208 [2024-12-10 00:15:09.040372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.208 [2024-12-10 00:15:09.040386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.208 [2024-12-10 00:15:09.040394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.208 [2024-12-10 00:15:09.040400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.208 [2024-12-10 00:15:09.040415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.208 qpair failed and we were unable to recover it. 00:33:34.208 [2024-12-10 00:15:09.050330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.208 [2024-12-10 00:15:09.050388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.208 [2024-12-10 00:15:09.050402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.208 [2024-12-10 00:15:09.050409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.208 [2024-12-10 00:15:09.050416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.208 [2024-12-10 00:15:09.050431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.208 qpair failed and we were unable to recover it. 
00:33:34.208 [2024-12-10 00:15:09.060341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.208 [2024-12-10 00:15:09.060391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.208 [2024-12-10 00:15:09.060405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.208 [2024-12-10 00:15:09.060412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.208 [2024-12-10 00:15:09.060419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.208 [2024-12-10 00:15:09.060435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.208 qpair failed and we were unable to recover it. 00:33:34.208 [2024-12-10 00:15:09.070405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.208 [2024-12-10 00:15:09.070467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.208 [2024-12-10 00:15:09.070482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.208 [2024-12-10 00:15:09.070489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.208 [2024-12-10 00:15:09.070496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.208 [2024-12-10 00:15:09.070515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.208 qpair failed and we were unable to recover it. 00:33:34.208 [2024-12-10 00:15:09.080375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.208 [2024-12-10 00:15:09.080435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.208 [2024-12-10 00:15:09.080449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.208 [2024-12-10 00:15:09.080456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.208 [2024-12-10 00:15:09.080463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.208 [2024-12-10 00:15:09.080478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.208 qpair failed and we were unable to recover it. 
00:33:34.208 [2024-12-10 00:15:09.090427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.208 [2024-12-10 00:15:09.090482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.208 [2024-12-10 00:15:09.090496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.208 [2024-12-10 00:15:09.090504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.208 [2024-12-10 00:15:09.090511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.208 [2024-12-10 00:15:09.090527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.208 qpair failed and we were unable to recover it. 00:33:34.208 [2024-12-10 00:15:09.100483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.208 [2024-12-10 00:15:09.100542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.208 [2024-12-10 00:15:09.100556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.208 [2024-12-10 00:15:09.100563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.208 [2024-12-10 00:15:09.100570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.208 [2024-12-10 00:15:09.100585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.208 qpair failed and we were unable to recover it. 00:33:34.208 [2024-12-10 00:15:09.110418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.208 [2024-12-10 00:15:09.110477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.208 [2024-12-10 00:15:09.110491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.208 [2024-12-10 00:15:09.110499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.209 [2024-12-10 00:15:09.110506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.209 [2024-12-10 00:15:09.110521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.209 qpair failed and we were unable to recover it. 
00:33:34.209 [2024-12-10 00:15:09.120506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.209 [2024-12-10 00:15:09.120563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.209 [2024-12-10 00:15:09.120577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.209 [2024-12-10 00:15:09.120585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.209 [2024-12-10 00:15:09.120592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.209 [2024-12-10 00:15:09.120608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.209 qpair failed and we were unable to recover it. 00:33:34.209 [2024-12-10 00:15:09.130526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.209 [2024-12-10 00:15:09.130585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.209 [2024-12-10 00:15:09.130600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.209 [2024-12-10 00:15:09.130608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.209 [2024-12-10 00:15:09.130615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.209 [2024-12-10 00:15:09.130630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.209 qpair failed and we were unable to recover it. 00:33:34.482 [2024-12-10 00:15:09.140584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.482 [2024-12-10 00:15:09.140645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.482 [2024-12-10 00:15:09.140659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.482 [2024-12-10 00:15:09.140667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.482 [2024-12-10 00:15:09.140673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.482 [2024-12-10 00:15:09.140688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.482 qpair failed and we were unable to recover it. 
00:33:34.482 [2024-12-10 00:15:09.150699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.482 [2024-12-10 00:15:09.150801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.482 [2024-12-10 00:15:09.150816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.482 [2024-12-10 00:15:09.150823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.482 [2024-12-10 00:15:09.150830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.482 [2024-12-10 00:15:09.150847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.482 qpair failed and we were unable to recover it. 00:33:34.482 [2024-12-10 00:15:09.160623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.482 [2024-12-10 00:15:09.160685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.482 [2024-12-10 00:15:09.160704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.482 [2024-12-10 00:15:09.160713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.482 [2024-12-10 00:15:09.160719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.482 [2024-12-10 00:15:09.160737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.482 qpair failed and we were unable to recover it. 00:33:34.482 [2024-12-10 00:15:09.170643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.482 [2024-12-10 00:15:09.170701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.482 [2024-12-10 00:15:09.170716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.482 [2024-12-10 00:15:09.170723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.482 [2024-12-10 00:15:09.170730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.482 [2024-12-10 00:15:09.170745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.482 qpair failed and we were unable to recover it. 
00:33:34.482 [2024-12-10 00:15:09.180671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.482 [2024-12-10 00:15:09.180728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.482 [2024-12-10 00:15:09.180742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.482 [2024-12-10 00:15:09.180750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.482 [2024-12-10 00:15:09.180756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.482 [2024-12-10 00:15:09.180772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.482 qpair failed and we were unable to recover it. 00:33:34.482 [2024-12-10 00:15:09.190709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.482 [2024-12-10 00:15:09.190768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.482 [2024-12-10 00:15:09.190782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.482 [2024-12-10 00:15:09.190789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.482 [2024-12-10 00:15:09.190796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.482 [2024-12-10 00:15:09.190811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.482 qpair failed and we were unable to recover it. 00:33:34.482 [2024-12-10 00:15:09.200736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.482 [2024-12-10 00:15:09.200795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.482 [2024-12-10 00:15:09.200809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.482 [2024-12-10 00:15:09.200817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.482 [2024-12-10 00:15:09.200828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.482 [2024-12-10 00:15:09.200844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.482 qpair failed and we were unable to recover it. 
00:33:34.482 [2024-12-10 00:15:09.210764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.482 [2024-12-10 00:15:09.210822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.482 [2024-12-10 00:15:09.210836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.482 [2024-12-10 00:15:09.210843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.482 [2024-12-10 00:15:09.210850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.483 [2024-12-10 00:15:09.210865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.483 qpair failed and we were unable to recover it. 00:33:34.483 [2024-12-10 00:15:09.220782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.483 [2024-12-10 00:15:09.220836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.483 [2024-12-10 00:15:09.220850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.483 [2024-12-10 00:15:09.220858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.483 [2024-12-10 00:15:09.220865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.483 [2024-12-10 00:15:09.220880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.483 qpair failed and we were unable to recover it. 00:33:34.483 [2024-12-10 00:15:09.230826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.483 [2024-12-10 00:15:09.230902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.483 [2024-12-10 00:15:09.230917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.483 [2024-12-10 00:15:09.230925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.483 [2024-12-10 00:15:09.230931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.483 [2024-12-10 00:15:09.230947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.483 qpair failed and we were unable to recover it. 
00:33:34.483 [2024-12-10 00:15:09.240853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.483 [2024-12-10 00:15:09.240911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.483 [2024-12-10 00:15:09.240926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.483 [2024-12-10 00:15:09.240934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.483 [2024-12-10 00:15:09.240941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.483 [2024-12-10 00:15:09.240956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.483 qpair failed and we were unable to recover it. 00:33:34.483 [2024-12-10 00:15:09.250921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.483 [2024-12-10 00:15:09.250983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.483 [2024-12-10 00:15:09.250998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.483 [2024-12-10 00:15:09.251005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.483 [2024-12-10 00:15:09.251012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.483 [2024-12-10 00:15:09.251026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.483 qpair failed and we were unable to recover it. 00:33:34.483 [2024-12-10 00:15:09.260909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.483 [2024-12-10 00:15:09.260972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.483 [2024-12-10 00:15:09.260986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.483 [2024-12-10 00:15:09.260994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.483 [2024-12-10 00:15:09.261001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.483 [2024-12-10 00:15:09.261016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.483 qpair failed and we were unable to recover it. 
00:33:34.483 [2024-12-10 00:15:09.270930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.483 [2024-12-10 00:15:09.270989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.483 [2024-12-10 00:15:09.271004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.483 [2024-12-10 00:15:09.271012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.483 [2024-12-10 00:15:09.271018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.483 [2024-12-10 00:15:09.271033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.483 qpair failed and we were unable to recover it. 00:33:34.483 [2024-12-10 00:15:09.280959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.483 [2024-12-10 00:15:09.281017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.483 [2024-12-10 00:15:09.281031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.483 [2024-12-10 00:15:09.281039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.483 [2024-12-10 00:15:09.281045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.483 [2024-12-10 00:15:09.281060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.483 qpair failed and we were unable to recover it. 00:33:34.483 [2024-12-10 00:15:09.291008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.483 [2024-12-10 00:15:09.291063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.483 [2024-12-10 00:15:09.291082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.483 [2024-12-10 00:15:09.291090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.483 [2024-12-10 00:15:09.291096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.483 [2024-12-10 00:15:09.291111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.483 qpair failed and we were unable to recover it. 
00:33:34.483 [2024-12-10 00:15:09.300941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.483 [2024-12-10 00:15:09.300995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.483 [2024-12-10 00:15:09.301010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.483 [2024-12-10 00:15:09.301017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.483 [2024-12-10 00:15:09.301024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.483 [2024-12-10 00:15:09.301039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.483 qpair failed and we were unable to recover it. 00:33:34.483 [2024-12-10 00:15:09.311061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.483 [2024-12-10 00:15:09.311123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.483 [2024-12-10 00:15:09.311138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.483 [2024-12-10 00:15:09.311145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.483 [2024-12-10 00:15:09.311151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.483 [2024-12-10 00:15:09.311172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.483 qpair failed and we were unable to recover it. 00:33:34.483 [2024-12-10 00:15:09.321080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.483 [2024-12-10 00:15:09.321131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.483 [2024-12-10 00:15:09.321145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.483 [2024-12-10 00:15:09.321152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.483 [2024-12-10 00:15:09.321163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.483 [2024-12-10 00:15:09.321180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.483 qpair failed and we were unable to recover it. 
00:33:34.483 [2024-12-10 00:15:09.331110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.483 [2024-12-10 00:15:09.331164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.483 [2024-12-10 00:15:09.331178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.483 [2024-12-10 00:15:09.331185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.483 [2024-12-10 00:15:09.331195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.483 [2024-12-10 00:15:09.331210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.483 qpair failed and we were unable to recover it. 00:33:34.483 [2024-12-10 00:15:09.341139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.483 [2024-12-10 00:15:09.341193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.483 [2024-12-10 00:15:09.341208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.483 [2024-12-10 00:15:09.341216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.483 [2024-12-10 00:15:09.341223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.483 [2024-12-10 00:15:09.341238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.483 qpair failed and we were unable to recover it. 00:33:34.484 [2024-12-10 00:15:09.351182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.484 [2024-12-10 00:15:09.351251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.484 [2024-12-10 00:15:09.351265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.484 [2024-12-10 00:15:09.351273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.484 [2024-12-10 00:15:09.351279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.484 [2024-12-10 00:15:09.351295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.484 qpair failed and we were unable to recover it. 
00:33:34.484 [2024-12-10 00:15:09.361191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.484 [2024-12-10 00:15:09.361253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.484 [2024-12-10 00:15:09.361267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.484 [2024-12-10 00:15:09.361275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.484 [2024-12-10 00:15:09.361281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.484 [2024-12-10 00:15:09.361297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.484 qpair failed and we were unable to recover it. 00:33:34.484 [2024-12-10 00:15:09.371233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.484 [2024-12-10 00:15:09.371301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.484 [2024-12-10 00:15:09.371315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.484 [2024-12-10 00:15:09.371322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.484 [2024-12-10 00:15:09.371328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.484 [2024-12-10 00:15:09.371344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.484 qpair failed and we were unable to recover it. 00:33:34.484 [2024-12-10 00:15:09.381238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.484 [2024-12-10 00:15:09.381293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.484 [2024-12-10 00:15:09.381308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.484 [2024-12-10 00:15:09.381315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.484 [2024-12-10 00:15:09.381322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.484 [2024-12-10 00:15:09.381337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.484 qpair failed and we were unable to recover it. 
00:33:34.484 [2024-12-10 00:15:09.391287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.484 [2024-12-10 00:15:09.391353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.484 [2024-12-10 00:15:09.391367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.484 [2024-12-10 00:15:09.391375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.484 [2024-12-10 00:15:09.391381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.484 [2024-12-10 00:15:09.391396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.484 qpair failed and we were unable to recover it. 00:33:34.484 [2024-12-10 00:15:09.401309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.484 [2024-12-10 00:15:09.401362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.484 [2024-12-10 00:15:09.401375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.484 [2024-12-10 00:15:09.401383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.484 [2024-12-10 00:15:09.401389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.484 [2024-12-10 00:15:09.401404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.484 qpair failed and we were unable to recover it. 00:33:34.754 [2024-12-10 00:15:09.411371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.754 [2024-12-10 00:15:09.411432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.754 [2024-12-10 00:15:09.411445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.754 [2024-12-10 00:15:09.411454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.754 [2024-12-10 00:15:09.411461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.754 [2024-12-10 00:15:09.411476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.754 qpair failed and we were unable to recover it. 
00:33:34.754 [2024-12-10 00:15:09.421390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.754 [2024-12-10 00:15:09.421454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.754 [2024-12-10 00:15:09.421468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.754 [2024-12-10 00:15:09.421475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.754 [2024-12-10 00:15:09.421482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.754 [2024-12-10 00:15:09.421498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.754 qpair failed and we were unable to recover it. 00:33:34.754 [2024-12-10 00:15:09.431410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.754 [2024-12-10 00:15:09.431470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.754 [2024-12-10 00:15:09.431484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.754 [2024-12-10 00:15:09.431491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.754 [2024-12-10 00:15:09.431498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.754 [2024-12-10 00:15:09.431513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.754 qpair failed and we were unable to recover it. 00:33:34.754 [2024-12-10 00:15:09.441436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.754 [2024-12-10 00:15:09.441493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.754 [2024-12-10 00:15:09.441507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.754 [2024-12-10 00:15:09.441514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.754 [2024-12-10 00:15:09.441520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.754 [2024-12-10 00:15:09.441536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.754 qpair failed and we were unable to recover it. 
00:33:34.754 [2024-12-10 00:15:09.451449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.754 [2024-12-10 00:15:09.451510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.754 [2024-12-10 00:15:09.451524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.754 [2024-12-10 00:15:09.451532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.754 [2024-12-10 00:15:09.451538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.754 [2024-12-10 00:15:09.451554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.754 qpair failed and we were unable to recover it. 00:33:34.754 [2024-12-10 00:15:09.461480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.754 [2024-12-10 00:15:09.461532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.754 [2024-12-10 00:15:09.461546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.754 [2024-12-10 00:15:09.461558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.754 [2024-12-10 00:15:09.461564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.754 [2024-12-10 00:15:09.461580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.754 qpair failed and we were unable to recover it. 00:33:34.754 [2024-12-10 00:15:09.471512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.754 [2024-12-10 00:15:09.471567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.754 [2024-12-10 00:15:09.471581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.754 [2024-12-10 00:15:09.471588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.754 [2024-12-10 00:15:09.471595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.754 [2024-12-10 00:15:09.471611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.754 qpair failed and we were unable to recover it. 
00:33:34.754 [2024-12-10 00:15:09.481549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.754 [2024-12-10 00:15:09.481618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.754 [2024-12-10 00:15:09.481632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.754 [2024-12-10 00:15:09.481640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.754 [2024-12-10 00:15:09.481647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.754 [2024-12-10 00:15:09.481662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.754 qpair failed and we were unable to recover it. 00:33:34.754 [2024-12-10 00:15:09.491556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.754 [2024-12-10 00:15:09.491611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.754 [2024-12-10 00:15:09.491625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.754 [2024-12-10 00:15:09.491634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.754 [2024-12-10 00:15:09.491640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.754 [2024-12-10 00:15:09.491656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.754 qpair failed and we were unable to recover it. 00:33:34.754 [2024-12-10 00:15:09.501586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.754 [2024-12-10 00:15:09.501643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.754 [2024-12-10 00:15:09.501656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.754 [2024-12-10 00:15:09.501664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.754 [2024-12-10 00:15:09.501670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.754 [2024-12-10 00:15:09.501689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.754 qpair failed and we were unable to recover it. 
00:33:34.754 [2024-12-10 00:15:09.511621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.754 [2024-12-10 00:15:09.511677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.754 [2024-12-10 00:15:09.511691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.754 [2024-12-10 00:15:09.511698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.754 [2024-12-10 00:15:09.511706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.754 [2024-12-10 00:15:09.511722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.754 qpair failed and we were unable to recover it. 00:33:34.754 [2024-12-10 00:15:09.521653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.754 [2024-12-10 00:15:09.521706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.754 [2024-12-10 00:15:09.521720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.754 [2024-12-10 00:15:09.521727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.754 [2024-12-10 00:15:09.521734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.754 [2024-12-10 00:15:09.521750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.754 qpair failed and we were unable to recover it. 00:33:34.754 [2024-12-10 00:15:09.531676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.754 [2024-12-10 00:15:09.531731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.754 [2024-12-10 00:15:09.531745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.754 [2024-12-10 00:15:09.531753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.754 [2024-12-10 00:15:09.531759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.754 [2024-12-10 00:15:09.531775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.755 qpair failed and we were unable to recover it. 
00:33:34.755 [2024-12-10 00:15:09.541733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.755 [2024-12-10 00:15:09.541791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.755 [2024-12-10 00:15:09.541805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.755 [2024-12-10 00:15:09.541813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.755 [2024-12-10 00:15:09.541819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.755 [2024-12-10 00:15:09.541834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.755 qpair failed and we were unable to recover it. 00:33:34.755 [2024-12-10 00:15:09.551751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.755 [2024-12-10 00:15:09.551815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.755 [2024-12-10 00:15:09.551830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.755 [2024-12-10 00:15:09.551837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.755 [2024-12-10 00:15:09.551843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.755 [2024-12-10 00:15:09.551859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.755 qpair failed and we were unable to recover it. 00:33:34.755 [2024-12-10 00:15:09.561763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.755 [2024-12-10 00:15:09.561821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.755 [2024-12-10 00:15:09.561835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.755 [2024-12-10 00:15:09.561842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.755 [2024-12-10 00:15:09.561849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.755 [2024-12-10 00:15:09.561865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.755 qpair failed and we were unable to recover it. 
00:33:34.755 [2024-12-10 00:15:09.571732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.755 [2024-12-10 00:15:09.571786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.755 [2024-12-10 00:15:09.571800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.755 [2024-12-10 00:15:09.571807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.755 [2024-12-10 00:15:09.571813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.755 [2024-12-10 00:15:09.571829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.755 qpair failed and we were unable to recover it. 00:33:34.755 [2024-12-10 00:15:09.581736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.755 [2024-12-10 00:15:09.581797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.755 [2024-12-10 00:15:09.581813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.755 [2024-12-10 00:15:09.581824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.755 [2024-12-10 00:15:09.581833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.755 [2024-12-10 00:15:09.581851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.755 qpair failed and we were unable to recover it. 00:33:34.755 [2024-12-10 00:15:09.591779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.755 [2024-12-10 00:15:09.591837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.755 [2024-12-10 00:15:09.591852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.755 [2024-12-10 00:15:09.591862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.755 [2024-12-10 00:15:09.591869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.755 [2024-12-10 00:15:09.591884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.755 qpair failed and we were unable to recover it. 
00:33:34.755 [2024-12-10 00:15:09.601864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.755 [2024-12-10 00:15:09.601919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.755 [2024-12-10 00:15:09.601933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.755 [2024-12-10 00:15:09.601940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.755 [2024-12-10 00:15:09.601947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.755 [2024-12-10 00:15:09.601962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.755 qpair failed and we were unable to recover it. 00:33:34.755 [2024-12-10 00:15:09.611899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.755 [2024-12-10 00:15:09.611952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.755 [2024-12-10 00:15:09.611966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.755 [2024-12-10 00:15:09.611973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.755 [2024-12-10 00:15:09.611980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.755 [2024-12-10 00:15:09.611996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.755 qpair failed and we were unable to recover it. 00:33:34.755 [2024-12-10 00:15:09.621917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.755 [2024-12-10 00:15:09.621976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.755 [2024-12-10 00:15:09.621990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.755 [2024-12-10 00:15:09.621997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.755 [2024-12-10 00:15:09.622003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.755 [2024-12-10 00:15:09.622019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.755 qpair failed and we were unable to recover it. 
00:33:34.755 [2024-12-10 00:15:09.631960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.755 [2024-12-10 00:15:09.632017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.755 [2024-12-10 00:15:09.632030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.755 [2024-12-10 00:15:09.632038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.755 [2024-12-10 00:15:09.632044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.755 [2024-12-10 00:15:09.632063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.755 qpair failed and we were unable to recover it. 00:33:34.755 [2024-12-10 00:15:09.641991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.755 [2024-12-10 00:15:09.642048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.755 [2024-12-10 00:15:09.642063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.755 [2024-12-10 00:15:09.642070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.755 [2024-12-10 00:15:09.642076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.755 [2024-12-10 00:15:09.642091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.755 qpair failed and we were unable to recover it. 00:33:34.755 [2024-12-10 00:15:09.651936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.755 [2024-12-10 00:15:09.651988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.755 [2024-12-10 00:15:09.652002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.755 [2024-12-10 00:15:09.652008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.755 [2024-12-10 00:15:09.652015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.755 [2024-12-10 00:15:09.652031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.755 qpair failed and we were unable to recover it. 
00:33:34.755 [2024-12-10 00:15:09.661973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.755 [2024-12-10 00:15:09.662025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.755 [2024-12-10 00:15:09.662039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.755 [2024-12-10 00:15:09.662047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.755 [2024-12-10 00:15:09.662053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.755 [2024-12-10 00:15:09.662068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.755 qpair failed and we were unable to recover it. 00:33:34.755 [2024-12-10 00:15:09.672072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.756 [2024-12-10 00:15:09.672128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.756 [2024-12-10 00:15:09.672142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.756 [2024-12-10 00:15:09.672150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.756 [2024-12-10 00:15:09.672156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.756 [2024-12-10 00:15:09.672177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.756 qpair failed and we were unable to recover it. 00:33:34.756 [2024-12-10 00:15:09.682086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:34.756 [2024-12-10 00:15:09.682146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:34.756 [2024-12-10 00:15:09.682165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:34.756 [2024-12-10 00:15:09.682173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:34.756 [2024-12-10 00:15:09.682180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:34.756 [2024-12-10 00:15:09.682195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:34.756 qpair failed and we were unable to recover it. 
00:33:35.015 [2024-12-10 00:15:09.692076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.015 [2024-12-10 00:15:09.692137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.015 [2024-12-10 00:15:09.692152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.015 [2024-12-10 00:15:09.692166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.015 [2024-12-10 00:15:09.692177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.015 [2024-12-10 00:15:09.692194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.015 qpair failed and we were unable to recover it. 00:33:35.015 [2024-12-10 00:15:09.702098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.015 [2024-12-10 00:15:09.702163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.015 [2024-12-10 00:15:09.702178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.015 [2024-12-10 00:15:09.702186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.015 [2024-12-10 00:15:09.702192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.015 [2024-12-10 00:15:09.702208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.015 qpair failed and we were unable to recover it. 00:33:35.015 [2024-12-10 00:15:09.712202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.015 [2024-12-10 00:15:09.712260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.015 [2024-12-10 00:15:09.712275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.015 [2024-12-10 00:15:09.712282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.015 [2024-12-10 00:15:09.712288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.015 [2024-12-10 00:15:09.712304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.015 qpair failed and we were unable to recover it. 
00:33:35.015 [2024-12-10 00:15:09.722217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.015 [2024-12-10 00:15:09.722281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.015 [2024-12-10 00:15:09.722301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.015 [2024-12-10 00:15:09.722308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.015 [2024-12-10 00:15:09.722314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.015 [2024-12-10 00:15:09.722330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.015 qpair failed and we were unable to recover it. 00:33:35.015 [2024-12-10 00:15:09.732241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.015 [2024-12-10 00:15:09.732299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.015 [2024-12-10 00:15:09.732314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.015 [2024-12-10 00:15:09.732322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.015 [2024-12-10 00:15:09.732328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.015 [2024-12-10 00:15:09.732344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.015 qpair failed and we were unable to recover it. 00:33:35.015 [2024-12-10 00:15:09.742260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.015 [2024-12-10 00:15:09.742315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.015 [2024-12-10 00:15:09.742329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.015 [2024-12-10 00:15:09.742336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.015 [2024-12-10 00:15:09.742343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.015 [2024-12-10 00:15:09.742358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.015 qpair failed and we were unable to recover it. 
00:33:35.015 [2024-12-10 00:15:09.752294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.015 [2024-12-10 00:15:09.752350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.015 [2024-12-10 00:15:09.752364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.015 [2024-12-10 00:15:09.752371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.015 [2024-12-10 00:15:09.752378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.015 [2024-12-10 00:15:09.752394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.015 qpair failed and we were unable to recover it. 00:33:35.015 [2024-12-10 00:15:09.762339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.015 [2024-12-10 00:15:09.762393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.015 [2024-12-10 00:15:09.762407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.015 [2024-12-10 00:15:09.762414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.015 [2024-12-10 00:15:09.762423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.016 [2024-12-10 00:15:09.762439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.016 qpair failed and we were unable to recover it. 00:33:35.016 [2024-12-10 00:15:09.772346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.016 [2024-12-10 00:15:09.772427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.016 [2024-12-10 00:15:09.772441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.016 [2024-12-10 00:15:09.772448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.016 [2024-12-10 00:15:09.772455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.016 [2024-12-10 00:15:09.772470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.016 qpair failed and we were unable to recover it. 
00:33:35.016 [2024-12-10 00:15:09.782381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.016 [2024-12-10 00:15:09.782434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.016 [2024-12-10 00:15:09.782448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.016 [2024-12-10 00:15:09.782455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.016 [2024-12-10 00:15:09.782461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.016 [2024-12-10 00:15:09.782476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.016 qpair failed and we were unable to recover it. 00:33:35.016 [2024-12-10 00:15:09.792406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.016 [2024-12-10 00:15:09.792476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.016 [2024-12-10 00:15:09.792490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.016 [2024-12-10 00:15:09.792497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.016 [2024-12-10 00:15:09.792504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.016 [2024-12-10 00:15:09.792520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.016 qpair failed and we were unable to recover it. 00:33:35.016 [2024-12-10 00:15:09.802428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.016 [2024-12-10 00:15:09.802488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.016 [2024-12-10 00:15:09.802502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.016 [2024-12-10 00:15:09.802509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.016 [2024-12-10 00:15:09.802516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.016 [2024-12-10 00:15:09.802531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.016 qpair failed and we were unable to recover it. 
00:33:35.016 [2024-12-10 00:15:09.812462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.016 [2024-12-10 00:15:09.812527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.016 [2024-12-10 00:15:09.812542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.016 [2024-12-10 00:15:09.812550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.016 [2024-12-10 00:15:09.812556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.016 [2024-12-10 00:15:09.812571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.016 qpair failed and we were unable to recover it. 00:33:35.016 [2024-12-10 00:15:09.822485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.016 [2024-12-10 00:15:09.822538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.016 [2024-12-10 00:15:09.822553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.016 [2024-12-10 00:15:09.822561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.016 [2024-12-10 00:15:09.822568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.016 [2024-12-10 00:15:09.822584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.016 qpair failed and we were unable to recover it. 00:33:35.016 [2024-12-10 00:15:09.832527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.016 [2024-12-10 00:15:09.832586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.016 [2024-12-10 00:15:09.832600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.016 [2024-12-10 00:15:09.832607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.016 [2024-12-10 00:15:09.832614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.016 [2024-12-10 00:15:09.832629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.016 qpair failed and we were unable to recover it. 
00:33:35.016 [2024-12-10 00:15:09.842529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.016 [2024-12-10 00:15:09.842586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.016 [2024-12-10 00:15:09.842601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.016 [2024-12-10 00:15:09.842609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.016 [2024-12-10 00:15:09.842616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.016 [2024-12-10 00:15:09.842631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.016 qpair failed and we were unable to recover it. 00:33:35.016 [2024-12-10 00:15:09.852566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.016 [2024-12-10 00:15:09.852631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.016 [2024-12-10 00:15:09.852650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.016 [2024-12-10 00:15:09.852657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.016 [2024-12-10 00:15:09.852663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.016 [2024-12-10 00:15:09.852678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.016 qpair failed and we were unable to recover it. 00:33:35.016 [2024-12-10 00:15:09.862599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.016 [2024-12-10 00:15:09.862655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.016 [2024-12-10 00:15:09.862670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.016 [2024-12-10 00:15:09.862677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.016 [2024-12-10 00:15:09.862683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.016 [2024-12-10 00:15:09.862700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.016 qpair failed and we were unable to recover it. 
00:33:35.016 [2024-12-10 00:15:09.872634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.016 [2024-12-10 00:15:09.872700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.016 [2024-12-10 00:15:09.872714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.016 [2024-12-10 00:15:09.872722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.016 [2024-12-10 00:15:09.872728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.016 [2024-12-10 00:15:09.872744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.016 qpair failed and we were unable to recover it. 00:33:35.016 [2024-12-10 00:15:09.882667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.016 [2024-12-10 00:15:09.882722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.016 [2024-12-10 00:15:09.882736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.016 [2024-12-10 00:15:09.882743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.016 [2024-12-10 00:15:09.882750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.016 [2024-12-10 00:15:09.882765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.016 qpair failed and we were unable to recover it. 00:33:35.016 [2024-12-10 00:15:09.892609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.016 [2024-12-10 00:15:09.892663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.016 [2024-12-10 00:15:09.892677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.016 [2024-12-10 00:15:09.892684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.016 [2024-12-10 00:15:09.892695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.016 [2024-12-10 00:15:09.892711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.016 qpair failed and we were unable to recover it. 
00:33:35.016 [2024-12-10 00:15:09.902719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.017 [2024-12-10 00:15:09.902774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.017 [2024-12-10 00:15:09.902789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.017 [2024-12-10 00:15:09.902797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.017 [2024-12-10 00:15:09.902805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.017 [2024-12-10 00:15:09.902820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.017 qpair failed and we were unable to recover it. 00:33:35.017 [2024-12-10 00:15:09.912763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.017 [2024-12-10 00:15:09.912822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.017 [2024-12-10 00:15:09.912836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.017 [2024-12-10 00:15:09.912844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.017 [2024-12-10 00:15:09.912850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.017 [2024-12-10 00:15:09.912867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.017 qpair failed and we were unable to recover it. 00:33:35.017 [2024-12-10 00:15:09.922780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.017 [2024-12-10 00:15:09.922843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.017 [2024-12-10 00:15:09.922857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.017 [2024-12-10 00:15:09.922864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.017 [2024-12-10 00:15:09.922870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.017 [2024-12-10 00:15:09.922886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.017 qpair failed and we were unable to recover it. 
00:33:35.017 [2024-12-10 00:15:09.932734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.017 [2024-12-10 00:15:09.932789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.017 [2024-12-10 00:15:09.932803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.017 [2024-12-10 00:15:09.932810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.017 [2024-12-10 00:15:09.932817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.017 [2024-12-10 00:15:09.932832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.017 qpair failed and we were unable to recover it. 00:33:35.017 [2024-12-10 00:15:09.942773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.017 [2024-12-10 00:15:09.942832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.017 [2024-12-10 00:15:09.942846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.017 [2024-12-10 00:15:09.942853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.017 [2024-12-10 00:15:09.942859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.017 [2024-12-10 00:15:09.942874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.017 qpair failed and we were unable to recover it. 00:33:35.277 [2024-12-10 00:15:09.952899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.277 [2024-12-10 00:15:09.952958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.277 [2024-12-10 00:15:09.952972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.277 [2024-12-10 00:15:09.952979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.277 [2024-12-10 00:15:09.952986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.277 [2024-12-10 00:15:09.953001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.277 qpair failed and we were unable to recover it. 
00:33:35.277 [2024-12-10 00:15:09.962904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.277 [2024-12-10 00:15:09.962963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.277 [2024-12-10 00:15:09.962978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.277 [2024-12-10 00:15:09.962985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.277 [2024-12-10 00:15:09.962991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.277 [2024-12-10 00:15:09.963006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.277 qpair failed and we were unable to recover it. 00:33:35.277 [2024-12-10 00:15:09.972901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.277 [2024-12-10 00:15:09.972960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.277 [2024-12-10 00:15:09.972974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.277 [2024-12-10 00:15:09.972981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.277 [2024-12-10 00:15:09.972987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.277 [2024-12-10 00:15:09.973002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.277 qpair failed and we were unable to recover it. 00:33:35.277 [2024-12-10 00:15:09.982936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.277 [2024-12-10 00:15:09.982999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.277 [2024-12-10 00:15:09.983014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.277 [2024-12-10 00:15:09.983021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.277 [2024-12-10 00:15:09.983027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.277 [2024-12-10 00:15:09.983043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.277 qpair failed and we were unable to recover it. 
00:33:35.277 [2024-12-10 00:15:09.992995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.277 [2024-12-10 00:15:09.993056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.277 [2024-12-10 00:15:09.993070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.277 [2024-12-10 00:15:09.993077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.277 [2024-12-10 00:15:09.993084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.277 [2024-12-10 00:15:09.993100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.277 qpair failed and we were unable to recover it. 00:33:35.277 [2024-12-10 00:15:10.003017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.277 [2024-12-10 00:15:10.003076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.277 [2024-12-10 00:15:10.003090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.277 [2024-12-10 00:15:10.003099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.277 [2024-12-10 00:15:10.003105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.277 [2024-12-10 00:15:10.003121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.277 qpair failed and we were unable to recover it. 00:33:35.277 [2024-12-10 00:15:10.013034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.277 [2024-12-10 00:15:10.013091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.277 [2024-12-10 00:15:10.013105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.277 [2024-12-10 00:15:10.013114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.277 [2024-12-10 00:15:10.013120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.277 [2024-12-10 00:15:10.013135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.277 qpair failed and we were unable to recover it. 
00:33:35.277 [2024-12-10 00:15:10.023125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.277 [2024-12-10 00:15:10.023193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.277 [2024-12-10 00:15:10.023210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.277 [2024-12-10 00:15:10.023220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.277 [2024-12-10 00:15:10.023227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.277 [2024-12-10 00:15:10.023244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.277 qpair failed and we were unable to recover it. 00:33:35.277 [2024-12-10 00:15:10.033121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.277 [2024-12-10 00:15:10.033201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.277 [2024-12-10 00:15:10.033217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.277 [2024-12-10 00:15:10.033224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.277 [2024-12-10 00:15:10.033231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.277 [2024-12-10 00:15:10.033247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.277 qpair failed and we were unable to recover it. 00:33:35.277 [2024-12-10 00:15:10.043146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.277 [2024-12-10 00:15:10.043213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.277 [2024-12-10 00:15:10.043227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.277 [2024-12-10 00:15:10.043234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.277 [2024-12-10 00:15:10.043241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.277 [2024-12-10 00:15:10.043257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.277 qpair failed and we were unable to recover it. 
00:33:35.277 [2024-12-10 00:15:10.053179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.277 [2024-12-10 00:15:10.053234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.277 [2024-12-10 00:15:10.053249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.277 [2024-12-10 00:15:10.053257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.277 [2024-12-10 00:15:10.053263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.277 [2024-12-10 00:15:10.053278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.277 qpair failed and we were unable to recover it. 00:33:35.277 [2024-12-10 00:15:10.063237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.277 [2024-12-10 00:15:10.063341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.277 [2024-12-10 00:15:10.063358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.277 [2024-12-10 00:15:10.063366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.277 [2024-12-10 00:15:10.063373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.277 [2024-12-10 00:15:10.063393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.277 qpair failed and we were unable to recover it. 00:33:35.277 [2024-12-10 00:15:10.073209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.277 [2024-12-10 00:15:10.073268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.277 [2024-12-10 00:15:10.073283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.277 [2024-12-10 00:15:10.073291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.277 [2024-12-10 00:15:10.073297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.277 [2024-12-10 00:15:10.073313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.277 qpair failed and we were unable to recover it. 
00:33:35.277 [2024-12-10 00:15:10.083257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.277 [2024-12-10 00:15:10.083318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.278 [2024-12-10 00:15:10.083333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.278 [2024-12-10 00:15:10.083341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.278 [2024-12-10 00:15:10.083348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.278 [2024-12-10 00:15:10.083363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.278 qpair failed and we were unable to recover it. 00:33:35.278 [2024-12-10 00:15:10.093269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.278 [2024-12-10 00:15:10.093324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.278 [2024-12-10 00:15:10.093339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.278 [2024-12-10 00:15:10.093346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.278 [2024-12-10 00:15:10.093353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.278 [2024-12-10 00:15:10.093368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.278 qpair failed and we were unable to recover it. 00:33:35.278 [2024-12-10 00:15:10.103274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.278 [2024-12-10 00:15:10.103329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.278 [2024-12-10 00:15:10.103343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.278 [2024-12-10 00:15:10.103350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.278 [2024-12-10 00:15:10.103357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.278 [2024-12-10 00:15:10.103374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.278 qpair failed and we were unable to recover it. 
00:33:35.278 [2024-12-10 00:15:10.113285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.278 [2024-12-10 00:15:10.113351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.278 [2024-12-10 00:15:10.113366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.278 [2024-12-10 00:15:10.113374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.278 [2024-12-10 00:15:10.113380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.278 [2024-12-10 00:15:10.113395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.278 qpair failed and we were unable to recover it. 00:33:35.278 [2024-12-10 00:15:10.123336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.278 [2024-12-10 00:15:10.123391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.278 [2024-12-10 00:15:10.123406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.278 [2024-12-10 00:15:10.123412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.278 [2024-12-10 00:15:10.123419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.278 [2024-12-10 00:15:10.123434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.278 qpair failed and we were unable to recover it. 00:33:35.278 [2024-12-10 00:15:10.133293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.278 [2024-12-10 00:15:10.133354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.278 [2024-12-10 00:15:10.133368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.278 [2024-12-10 00:15:10.133376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.278 [2024-12-10 00:15:10.133382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.278 [2024-12-10 00:15:10.133398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.278 qpair failed and we were unable to recover it. 
00:33:35.278 [2024-12-10 00:15:10.143329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.278 [2024-12-10 00:15:10.143418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.278 [2024-12-10 00:15:10.143433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.278 [2024-12-10 00:15:10.143440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.278 [2024-12-10 00:15:10.143446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.278 [2024-12-10 00:15:10.143461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.278 qpair failed and we were unable to recover it. 00:33:35.278 [2024-12-10 00:15:10.153363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.278 [2024-12-10 00:15:10.153421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.278 [2024-12-10 00:15:10.153439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.278 [2024-12-10 00:15:10.153446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.278 [2024-12-10 00:15:10.153453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.278 [2024-12-10 00:15:10.153469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.278 qpair failed and we were unable to recover it. 00:33:35.278 [2024-12-10 00:15:10.163383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.278 [2024-12-10 00:15:10.163441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.278 [2024-12-10 00:15:10.163455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.278 [2024-12-10 00:15:10.163462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.278 [2024-12-10 00:15:10.163469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.278 [2024-12-10 00:15:10.163485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.278 qpair failed and we were unable to recover it. 
00:33:35.278 [2024-12-10 00:15:10.173481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.278 [2024-12-10 00:15:10.173555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.278 [2024-12-10 00:15:10.173569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.278 [2024-12-10 00:15:10.173577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.278 [2024-12-10 00:15:10.173583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.278 [2024-12-10 00:15:10.173599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.278 qpair failed and we were unable to recover it. 00:33:35.278 [2024-12-10 00:15:10.183527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.278 [2024-12-10 00:15:10.183587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.278 [2024-12-10 00:15:10.183602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.278 [2024-12-10 00:15:10.183609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.278 [2024-12-10 00:15:10.183616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.278 [2024-12-10 00:15:10.183631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.278 qpair failed and we were unable to recover it. 00:33:35.278 [2024-12-10 00:15:10.193541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.278 [2024-12-10 00:15:10.193594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.278 [2024-12-10 00:15:10.193609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.278 [2024-12-10 00:15:10.193616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.278 [2024-12-10 00:15:10.193622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.278 [2024-12-10 00:15:10.193641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.278 qpair failed and we were unable to recover it. 
00:33:35.278 [2024-12-10 00:15:10.203581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.278 [2024-12-10 00:15:10.203641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.278 [2024-12-10 00:15:10.203656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.278 [2024-12-10 00:15:10.203663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.278 [2024-12-10 00:15:10.203669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.278 [2024-12-10 00:15:10.203684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.278 qpair failed and we were unable to recover it. 00:33:35.538 [2024-12-10 00:15:10.213611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.538 [2024-12-10 00:15:10.213679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.538 [2024-12-10 00:15:10.213694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.538 [2024-12-10 00:15:10.213702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.538 [2024-12-10 00:15:10.213708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.538 [2024-12-10 00:15:10.213723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.538 qpair failed and we were unable to recover it. 00:33:35.538 [2024-12-10 00:15:10.223582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.538 [2024-12-10 00:15:10.223643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.539 [2024-12-10 00:15:10.223657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.539 [2024-12-10 00:15:10.223664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.539 [2024-12-10 00:15:10.223671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.539 [2024-12-10 00:15:10.223687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.539 qpair failed and we were unable to recover it. 
00:33:35.539 [2024-12-10 00:15:10.233702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.539 [2024-12-10 00:15:10.233757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.539 [2024-12-10 00:15:10.233771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.539 [2024-12-10 00:15:10.233778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.539 [2024-12-10 00:15:10.233784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.539 [2024-12-10 00:15:10.233799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.539 qpair failed and we were unable to recover it. 00:33:35.539 [2024-12-10 00:15:10.243695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.539 [2024-12-10 00:15:10.243769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.539 [2024-12-10 00:15:10.243784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.539 [2024-12-10 00:15:10.243790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.539 [2024-12-10 00:15:10.243797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.539 [2024-12-10 00:15:10.243813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.539 qpair failed and we were unable to recover it. 00:33:35.539 [2024-12-10 00:15:10.253728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.539 [2024-12-10 00:15:10.253795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.539 [2024-12-10 00:15:10.253809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.539 [2024-12-10 00:15:10.253817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.539 [2024-12-10 00:15:10.253823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.539 [2024-12-10 00:15:10.253838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.539 qpair failed and we were unable to recover it. 
00:33:35.539 [2024-12-10 00:15:10.263735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.539 [2024-12-10 00:15:10.263789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.539 [2024-12-10 00:15:10.263804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.539 [2024-12-10 00:15:10.263811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.539 [2024-12-10 00:15:10.263817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.539 [2024-12-10 00:15:10.263833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.539 qpair failed and we were unable to recover it. 00:33:35.539 [2024-12-10 00:15:10.273734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.539 [2024-12-10 00:15:10.273806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.539 [2024-12-10 00:15:10.273820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.539 [2024-12-10 00:15:10.273827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.539 [2024-12-10 00:15:10.273833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.539 [2024-12-10 00:15:10.273849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.539 qpair failed and we were unable to recover it. 00:33:35.539 [2024-12-10 00:15:10.283740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.539 [2024-12-10 00:15:10.283801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.539 [2024-12-10 00:15:10.283818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.539 [2024-12-10 00:15:10.283827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.539 [2024-12-10 00:15:10.283833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.539 [2024-12-10 00:15:10.283849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.539 qpair failed and we were unable to recover it. 
00:33:35.539 [2024-12-10 00:15:10.293832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.539 [2024-12-10 00:15:10.293881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.539 [2024-12-10 00:15:10.293895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.539 [2024-12-10 00:15:10.293902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.539 [2024-12-10 00:15:10.293908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.539 [2024-12-10 00:15:10.293923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.539 qpair failed and we were unable to recover it. 00:33:35.539 [2024-12-10 00:15:10.303873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.539 [2024-12-10 00:15:10.303938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.539 [2024-12-10 00:15:10.303952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.539 [2024-12-10 00:15:10.303960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.539 [2024-12-10 00:15:10.303966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.539 [2024-12-10 00:15:10.303982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.539 qpair failed and we were unable to recover it. 00:33:35.539 [2024-12-10 00:15:10.313951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.539 [2024-12-10 00:15:10.314016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.539 [2024-12-10 00:15:10.314030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.539 [2024-12-10 00:15:10.314037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.539 [2024-12-10 00:15:10.314043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.539 [2024-12-10 00:15:10.314059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.539 qpair failed and we were unable to recover it. 
00:33:35.539 [2024-12-10 00:15:10.323881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.539 [2024-12-10 00:15:10.323976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.539 [2024-12-10 00:15:10.323990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.539 [2024-12-10 00:15:10.323999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.539 [2024-12-10 00:15:10.324009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.539 [2024-12-10 00:15:10.324025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.539 qpair failed and we were unable to recover it. 00:33:35.539 [2024-12-10 00:15:10.333961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.539 [2024-12-10 00:15:10.334019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.539 [2024-12-10 00:15:10.334034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.539 [2024-12-10 00:15:10.334041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.539 [2024-12-10 00:15:10.334048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.539 [2024-12-10 00:15:10.334063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.539 qpair failed and we were unable to recover it. 00:33:35.539 [2024-12-10 00:15:10.343986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.539 [2024-12-10 00:15:10.344046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.539 [2024-12-10 00:15:10.344059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.539 [2024-12-10 00:15:10.344067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.539 [2024-12-10 00:15:10.344073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.539 [2024-12-10 00:15:10.344088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.539 qpair failed and we were unable to recover it. 
00:33:35.539 [2024-12-10 00:15:10.353952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.539 [2024-12-10 00:15:10.354014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.539 [2024-12-10 00:15:10.354028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.540 [2024-12-10 00:15:10.354035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.540 [2024-12-10 00:15:10.354041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.540 [2024-12-10 00:15:10.354057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.540 qpair failed and we were unable to recover it. 00:33:35.540 [2024-12-10 00:15:10.363969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.540 [2024-12-10 00:15:10.364025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.540 [2024-12-10 00:15:10.364040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.540 [2024-12-10 00:15:10.364047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.540 [2024-12-10 00:15:10.364053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.540 [2024-12-10 00:15:10.364069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.540 qpair failed and we were unable to recover it. 00:33:35.540 [2024-12-10 00:15:10.374056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.540 [2024-12-10 00:15:10.374111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.540 [2024-12-10 00:15:10.374125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.540 [2024-12-10 00:15:10.374132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.540 [2024-12-10 00:15:10.374139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.540 [2024-12-10 00:15:10.374155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.540 qpair failed and we were unable to recover it. 
00:33:35.540 [2024-12-10 00:15:10.384118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.540 [2024-12-10 00:15:10.384173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.540 [2024-12-10 00:15:10.384189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.540 [2024-12-10 00:15:10.384199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.540 [2024-12-10 00:15:10.384206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.540 [2024-12-10 00:15:10.384223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.540 qpair failed and we were unable to recover it. 00:33:35.540 [2024-12-10 00:15:10.394128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.540 [2024-12-10 00:15:10.394192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.540 [2024-12-10 00:15:10.394208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.540 [2024-12-10 00:15:10.394215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.540 [2024-12-10 00:15:10.394221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.540 [2024-12-10 00:15:10.394237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.540 qpair failed and we were unable to recover it. 00:33:35.540 [2024-12-10 00:15:10.404152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.540 [2024-12-10 00:15:10.404214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.540 [2024-12-10 00:15:10.404228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.540 [2024-12-10 00:15:10.404235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.540 [2024-12-10 00:15:10.404241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.540 [2024-12-10 00:15:10.404256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.540 qpair failed and we were unable to recover it. 
00:33:35.540 [2024-12-10 00:15:10.414192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.540 [2024-12-10 00:15:10.414250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.540 [2024-12-10 00:15:10.414268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.540 [2024-12-10 00:15:10.414276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.540 [2024-12-10 00:15:10.414282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.540 [2024-12-10 00:15:10.414298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.540 qpair failed and we were unable to recover it. 00:33:35.540 [2024-12-10 00:15:10.424202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.540 [2024-12-10 00:15:10.424255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.540 [2024-12-10 00:15:10.424269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.540 [2024-12-10 00:15:10.424276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.540 [2024-12-10 00:15:10.424283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.540 [2024-12-10 00:15:10.424298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.540 qpair failed and we were unable to recover it. 00:33:35.540 [2024-12-10 00:15:10.434242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.540 [2024-12-10 00:15:10.434302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.540 [2024-12-10 00:15:10.434317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.540 [2024-12-10 00:15:10.434325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.540 [2024-12-10 00:15:10.434332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.540 [2024-12-10 00:15:10.434347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.540 qpair failed and we were unable to recover it. 
00:33:35.540 [2024-12-10 00:15:10.444298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.540 [2024-12-10 00:15:10.444356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.540 [2024-12-10 00:15:10.444370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.540 [2024-12-10 00:15:10.444377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.540 [2024-12-10 00:15:10.444384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.540 [2024-12-10 00:15:10.444399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.540 qpair failed and we were unable to recover it. 00:33:35.540 [2024-12-10 00:15:10.454333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.540 [2024-12-10 00:15:10.454391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.540 [2024-12-10 00:15:10.454405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.540 [2024-12-10 00:15:10.454418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.540 [2024-12-10 00:15:10.454424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.540 [2024-12-10 00:15:10.454440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.540 qpair failed and we were unable to recover it. 00:33:35.540 [2024-12-10 00:15:10.464364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.540 [2024-12-10 00:15:10.464416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.540 [2024-12-10 00:15:10.464431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.540 [2024-12-10 00:15:10.464437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.540 [2024-12-10 00:15:10.464444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.540 [2024-12-10 00:15:10.464459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.540 qpair failed and we were unable to recover it. 
00:33:35.802 [2024-12-10 00:15:10.474470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.802 [2024-12-10 00:15:10.474553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.802 [2024-12-10 00:15:10.474567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.802 [2024-12-10 00:15:10.474575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.802 [2024-12-10 00:15:10.474581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.802 [2024-12-10 00:15:10.474597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.802 qpair failed and we were unable to recover it. 00:33:35.803 [2024-12-10 00:15:10.484442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.803 [2024-12-10 00:15:10.484505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.803 [2024-12-10 00:15:10.484520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.803 [2024-12-10 00:15:10.484526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.803 [2024-12-10 00:15:10.484533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.803 [2024-12-10 00:15:10.484549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.803 qpair failed and we were unable to recover it. 00:33:35.803 [2024-12-10 00:15:10.494393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.803 [2024-12-10 00:15:10.494452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.803 [2024-12-10 00:15:10.494466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.803 [2024-12-10 00:15:10.494474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.803 [2024-12-10 00:15:10.494480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.803 [2024-12-10 00:15:10.494496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.803 qpair failed and we were unable to recover it. 
00:33:35.803 [2024-12-10 00:15:10.504432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.803 [2024-12-10 00:15:10.504485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.803 [2024-12-10 00:15:10.504500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.803 [2024-12-10 00:15:10.504506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.803 [2024-12-10 00:15:10.504513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.803 [2024-12-10 00:15:10.504529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.803 qpair failed and we were unable to recover it. 00:33:35.803 [2024-12-10 00:15:10.514453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.803 [2024-12-10 00:15:10.514517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.803 [2024-12-10 00:15:10.514531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.803 [2024-12-10 00:15:10.514538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.803 [2024-12-10 00:15:10.514544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.803 [2024-12-10 00:15:10.514560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.803 qpair failed and we were unable to recover it. 00:33:35.803 [2024-12-10 00:15:10.524455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.803 [2024-12-10 00:15:10.524513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.803 [2024-12-10 00:15:10.524527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.803 [2024-12-10 00:15:10.524534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.803 [2024-12-10 00:15:10.524541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.803 [2024-12-10 00:15:10.524556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.803 qpair failed and we were unable to recover it. 
00:33:35.803 [2024-12-10 00:15:10.534480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.803 [2024-12-10 00:15:10.534533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.803 [2024-12-10 00:15:10.534547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.803 [2024-12-10 00:15:10.534554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.803 [2024-12-10 00:15:10.534561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.803 [2024-12-10 00:15:10.534576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.803 qpair failed and we were unable to recover it. 00:33:35.803 [2024-12-10 00:15:10.544501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.803 [2024-12-10 00:15:10.544561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.803 [2024-12-10 00:15:10.544575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.803 [2024-12-10 00:15:10.544583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.803 [2024-12-10 00:15:10.544589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.803 [2024-12-10 00:15:10.544605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.803 qpair failed and we were unable to recover it. 00:33:35.803 [2024-12-10 00:15:10.554603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.803 [2024-12-10 00:15:10.554659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.803 [2024-12-10 00:15:10.554674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.803 [2024-12-10 00:15:10.554681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.803 [2024-12-10 00:15:10.554687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.803 [2024-12-10 00:15:10.554702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.803 qpair failed and we were unable to recover it. 
00:33:35.803 [2024-12-10 00:15:10.564624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.803 [2024-12-10 00:15:10.564679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.803 [2024-12-10 00:15:10.564693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.803 [2024-12-10 00:15:10.564700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.803 [2024-12-10 00:15:10.564707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.803 [2024-12-10 00:15:10.564722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.803 qpair failed and we were unable to recover it. 00:33:35.803 [2024-12-10 00:15:10.574657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.804 [2024-12-10 00:15:10.574715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.804 [2024-12-10 00:15:10.574729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.804 [2024-12-10 00:15:10.574737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.804 [2024-12-10 00:15:10.574743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.804 [2024-12-10 00:15:10.574758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.804 qpair failed and we were unable to recover it. 00:33:35.804 [2024-12-10 00:15:10.584693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.804 [2024-12-10 00:15:10.584745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.804 [2024-12-10 00:15:10.584760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.804 [2024-12-10 00:15:10.584771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.804 [2024-12-10 00:15:10.584778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.804 [2024-12-10 00:15:10.584794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.804 qpair failed and we were unable to recover it. 
00:33:35.804 [2024-12-10 00:15:10.594704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.804 [2024-12-10 00:15:10.594763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.804 [2024-12-10 00:15:10.594778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.804 [2024-12-10 00:15:10.594785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.804 [2024-12-10 00:15:10.594792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.804 [2024-12-10 00:15:10.594808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.804 qpair failed and we were unable to recover it. 00:33:35.804 [2024-12-10 00:15:10.604746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.804 [2024-12-10 00:15:10.604804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.804 [2024-12-10 00:15:10.604818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.804 [2024-12-10 00:15:10.604825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.804 [2024-12-10 00:15:10.604831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.804 [2024-12-10 00:15:10.604847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.804 qpair failed and we were unable to recover it. 00:33:35.804 [2024-12-10 00:15:10.614770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.804 [2024-12-10 00:15:10.614828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.804 [2024-12-10 00:15:10.614842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.804 [2024-12-10 00:15:10.614849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.804 [2024-12-10 00:15:10.614857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.804 [2024-12-10 00:15:10.614872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.804 qpair failed and we were unable to recover it. 
00:33:35.804 [2024-12-10 00:15:10.624802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.804 [2024-12-10 00:15:10.624860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.804 [2024-12-10 00:15:10.624875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.804 [2024-12-10 00:15:10.624882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.804 [2024-12-10 00:15:10.624889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.804 [2024-12-10 00:15:10.624908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.804 qpair failed and we were unable to recover it. 00:33:35.804 [2024-12-10 00:15:10.634823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.804 [2024-12-10 00:15:10.634878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.804 [2024-12-10 00:15:10.634892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.804 [2024-12-10 00:15:10.634900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.804 [2024-12-10 00:15:10.634906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.804 [2024-12-10 00:15:10.634922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.804 qpair failed and we were unable to recover it. 00:33:35.804 [2024-12-10 00:15:10.644889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.804 [2024-12-10 00:15:10.644948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.804 [2024-12-10 00:15:10.644962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.804 [2024-12-10 00:15:10.644970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.804 [2024-12-10 00:15:10.644977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.804 [2024-12-10 00:15:10.644992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.804 qpair failed and we were unable to recover it. 
00:33:35.804 [2024-12-10 00:15:10.654906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.804 [2024-12-10 00:15:10.654956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.804 [2024-12-10 00:15:10.654970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.804 [2024-12-10 00:15:10.654977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.804 [2024-12-10 00:15:10.654984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.804 [2024-12-10 00:15:10.655000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.804 qpair failed and we were unable to recover it. 00:33:35.805 [2024-12-10 00:15:10.664904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.805 [2024-12-10 00:15:10.664959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.805 [2024-12-10 00:15:10.664973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.805 [2024-12-10 00:15:10.664981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.805 [2024-12-10 00:15:10.664988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.805 [2024-12-10 00:15:10.665004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.805 qpair failed and we were unable to recover it. 00:33:35.805 [2024-12-10 00:15:10.674949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.805 [2024-12-10 00:15:10.675010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.805 [2024-12-10 00:15:10.675025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.805 [2024-12-10 00:15:10.675032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.805 [2024-12-10 00:15:10.675038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.805 [2024-12-10 00:15:10.675054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.805 qpair failed and we were unable to recover it. 
00:33:35.805 [2024-12-10 00:15:10.684970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.805 [2024-12-10 00:15:10.685056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.805 [2024-12-10 00:15:10.685071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.805 [2024-12-10 00:15:10.685078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.805 [2024-12-10 00:15:10.685085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.805 [2024-12-10 00:15:10.685100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.805 qpair failed and we were unable to recover it. 00:33:35.805 [2024-12-10 00:15:10.694998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.805 [2024-12-10 00:15:10.695053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.805 [2024-12-10 00:15:10.695067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.805 [2024-12-10 00:15:10.695074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.805 [2024-12-10 00:15:10.695081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.805 [2024-12-10 00:15:10.695096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.805 qpair failed and we were unable to recover it. 00:33:35.805 [2024-12-10 00:15:10.705027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.805 [2024-12-10 00:15:10.705083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.805 [2024-12-10 00:15:10.705098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.805 [2024-12-10 00:15:10.705106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.805 [2024-12-10 00:15:10.705112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.805 [2024-12-10 00:15:10.705127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.805 qpair failed and we were unable to recover it. 
00:33:35.805 [2024-12-10 00:15:10.715071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.805 [2024-12-10 00:15:10.715131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.805 [2024-12-10 00:15:10.715149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.805 [2024-12-10 00:15:10.715162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.805 [2024-12-10 00:15:10.715173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.805 [2024-12-10 00:15:10.715190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.805 qpair failed and we were unable to recover it. 00:33:35.805 [2024-12-10 00:15:10.725091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:35.805 [2024-12-10 00:15:10.725146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:35.805 [2024-12-10 00:15:10.725164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:35.805 [2024-12-10 00:15:10.725171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:35.805 [2024-12-10 00:15:10.725178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:35.805 [2024-12-10 00:15:10.725193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:35.805 qpair failed and we were unable to recover it. 00:33:36.067 [2024-12-10 00:15:10.735183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.067 [2024-12-10 00:15:10.735248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.067 [2024-12-10 00:15:10.735263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.067 [2024-12-10 00:15:10.735270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.067 [2024-12-10 00:15:10.735277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.067 [2024-12-10 00:15:10.735292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.067 qpair failed and we were unable to recover it. 
00:33:36.067 [2024-12-10 00:15:10.745173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.067 [2024-12-10 00:15:10.745232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.067 [2024-12-10 00:15:10.745246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.067 [2024-12-10 00:15:10.745254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.067 [2024-12-10 00:15:10.745260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.067 [2024-12-10 00:15:10.745276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.067 qpair failed and we were unable to recover it. 00:33:36.067 [2024-12-10 00:15:10.755233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.067 [2024-12-10 00:15:10.755289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.067 [2024-12-10 00:15:10.755304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.067 [2024-12-10 00:15:10.755311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.067 [2024-12-10 00:15:10.755317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.067 [2024-12-10 00:15:10.755336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.067 qpair failed and we were unable to recover it. 00:33:36.067 [2024-12-10 00:15:10.765208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.067 [2024-12-10 00:15:10.765262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.067 [2024-12-10 00:15:10.765277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.067 [2024-12-10 00:15:10.765284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.067 [2024-12-10 00:15:10.765291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.067 [2024-12-10 00:15:10.765307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.067 qpair failed and we were unable to recover it. 
00:33:36.067 [2024-12-10 00:15:10.775230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.067 [2024-12-10 00:15:10.775328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.067 [2024-12-10 00:15:10.775342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.067 [2024-12-10 00:15:10.775350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.067 [2024-12-10 00:15:10.775356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.067 [2024-12-10 00:15:10.775372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.067 qpair failed and we were unable to recover it. 00:33:36.067 [2024-12-10 00:15:10.785256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.067 [2024-12-10 00:15:10.785314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.067 [2024-12-10 00:15:10.785328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.067 [2024-12-10 00:15:10.785335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.067 [2024-12-10 00:15:10.785342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.067 [2024-12-10 00:15:10.785357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.067 qpair failed and we were unable to recover it. 00:33:36.067 [2024-12-10 00:15:10.795329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.067 [2024-12-10 00:15:10.795386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.067 [2024-12-10 00:15:10.795400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.067 [2024-12-10 00:15:10.795408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.067 [2024-12-10 00:15:10.795415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.067 [2024-12-10 00:15:10.795431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.067 qpair failed and we were unable to recover it. 
00:33:36.067 [2024-12-10 00:15:10.805319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.067 [2024-12-10 00:15:10.805376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.067 [2024-12-10 00:15:10.805390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.067 [2024-12-10 00:15:10.805397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.067 [2024-12-10 00:15:10.805404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.067 [2024-12-10 00:15:10.805420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.067 qpair failed and we were unable to recover it. 00:33:36.067 [2024-12-10 00:15:10.815329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.067 [2024-12-10 00:15:10.815392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.067 [2024-12-10 00:15:10.815408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.067 [2024-12-10 00:15:10.815415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.067 [2024-12-10 00:15:10.815422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.067 [2024-12-10 00:15:10.815437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.067 qpair failed and we were unable to recover it. 00:33:36.068 [2024-12-10 00:15:10.825376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.068 [2024-12-10 00:15:10.825426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.068 [2024-12-10 00:15:10.825441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.068 [2024-12-10 00:15:10.825448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.068 [2024-12-10 00:15:10.825454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.068 [2024-12-10 00:15:10.825470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.068 qpair failed and we were unable to recover it. 
00:33:36.068 [2024-12-10 00:15:10.835402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.068 [2024-12-10 00:15:10.835468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.068 [2024-12-10 00:15:10.835483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.068 [2024-12-10 00:15:10.835490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.068 [2024-12-10 00:15:10.835497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.068 [2024-12-10 00:15:10.835512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.068 qpair failed and we were unable to recover it. 00:33:36.068 [2024-12-10 00:15:10.845428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.068 [2024-12-10 00:15:10.845486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.068 [2024-12-10 00:15:10.845504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.068 [2024-12-10 00:15:10.845511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.068 [2024-12-10 00:15:10.845518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.068 [2024-12-10 00:15:10.845534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.068 qpair failed and we were unable to recover it. 00:33:36.068 [2024-12-10 00:15:10.855455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.068 [2024-12-10 00:15:10.855509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.068 [2024-12-10 00:15:10.855523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.068 [2024-12-10 00:15:10.855530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.068 [2024-12-10 00:15:10.855536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.068 [2024-12-10 00:15:10.855552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.068 qpair failed and we were unable to recover it. 
00:33:36.068 [2024-12-10 00:15:10.865478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.068 [2024-12-10 00:15:10.865534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.068 [2024-12-10 00:15:10.865548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.068 [2024-12-10 00:15:10.865557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.068 [2024-12-10 00:15:10.865563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.068 [2024-12-10 00:15:10.865579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.068 qpair failed and we were unable to recover it. 00:33:36.068 [2024-12-10 00:15:10.875533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.068 [2024-12-10 00:15:10.875609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.068 [2024-12-10 00:15:10.875623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.068 [2024-12-10 00:15:10.875631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.068 [2024-12-10 00:15:10.875637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.068 [2024-12-10 00:15:10.875653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.068 qpair failed and we were unable to recover it. 00:33:36.068 [2024-12-10 00:15:10.885572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.068 [2024-12-10 00:15:10.885638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.068 [2024-12-10 00:15:10.885653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.068 [2024-12-10 00:15:10.885660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.068 [2024-12-10 00:15:10.885669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.068 [2024-12-10 00:15:10.885685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.068 qpair failed and we were unable to recover it. 
00:33:36.068 [2024-12-10 00:15:10.895566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.068 [2024-12-10 00:15:10.895625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.068 [2024-12-10 00:15:10.895640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.068 [2024-12-10 00:15:10.895647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.068 [2024-12-10 00:15:10.895654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.068 [2024-12-10 00:15:10.895669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.068 qpair failed and we were unable to recover it. 00:33:36.068 [2024-12-10 00:15:10.905533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.068 [2024-12-10 00:15:10.905590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.068 [2024-12-10 00:15:10.905605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.068 [2024-12-10 00:15:10.905613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.068 [2024-12-10 00:15:10.905619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.068 [2024-12-10 00:15:10.905635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.068 qpair failed and we were unable to recover it. 00:33:36.068 [2024-12-10 00:15:10.915634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.068 [2024-12-10 00:15:10.915689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.069 [2024-12-10 00:15:10.915704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.069 [2024-12-10 00:15:10.915711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.069 [2024-12-10 00:15:10.915718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.069 [2024-12-10 00:15:10.915733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.069 qpair failed and we were unable to recover it. 
00:33:36.069 [2024-12-10 00:15:10.925643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.069 [2024-12-10 00:15:10.925702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.069 [2024-12-10 00:15:10.925715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.069 [2024-12-10 00:15:10.925723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.069 [2024-12-10 00:15:10.925729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.069 [2024-12-10 00:15:10.925745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.069 qpair failed and we were unable to recover it. 00:33:36.069 [2024-12-10 00:15:10.935672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.069 [2024-12-10 00:15:10.935722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.069 [2024-12-10 00:15:10.935736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.069 [2024-12-10 00:15:10.935743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.069 [2024-12-10 00:15:10.935750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.069 [2024-12-10 00:15:10.935766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.069 qpair failed and we were unable to recover it. 00:33:36.069 [2024-12-10 00:15:10.945710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.069 [2024-12-10 00:15:10.945761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.069 [2024-12-10 00:15:10.945775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.069 [2024-12-10 00:15:10.945782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.069 [2024-12-10 00:15:10.945789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.069 [2024-12-10 00:15:10.945804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.069 qpair failed and we were unable to recover it. 
00:33:36.069 [2024-12-10 00:15:10.955690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.069 [2024-12-10 00:15:10.955747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.069 [2024-12-10 00:15:10.955761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.069 [2024-12-10 00:15:10.955768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.069 [2024-12-10 00:15:10.955775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.069 [2024-12-10 00:15:10.955790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.069 qpair failed and we were unable to recover it. 00:33:36.069 [2024-12-10 00:15:10.965781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.069 [2024-12-10 00:15:10.965841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.069 [2024-12-10 00:15:10.965856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.069 [2024-12-10 00:15:10.965864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.069 [2024-12-10 00:15:10.965870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.069 [2024-12-10 00:15:10.965886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.069 qpair failed and we were unable to recover it. 00:33:36.069 [2024-12-10 00:15:10.975793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.069 [2024-12-10 00:15:10.975853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.069 [2024-12-10 00:15:10.975871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.069 [2024-12-10 00:15:10.975880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.069 [2024-12-10 00:15:10.975886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.069 [2024-12-10 00:15:10.975902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.069 qpair failed and we were unable to recover it. 
00:33:36.069 [2024-12-10 00:15:10.985758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.069 [2024-12-10 00:15:10.985841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.069 [2024-12-10 00:15:10.985856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.069 [2024-12-10 00:15:10.985863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.069 [2024-12-10 00:15:10.985870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.069 [2024-12-10 00:15:10.985886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.069 qpair failed and we were unable to recover it. 00:33:36.069 [2024-12-10 00:15:10.995823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.069 [2024-12-10 00:15:10.995889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.069 [2024-12-10 00:15:10.995903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.069 [2024-12-10 00:15:10.995911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.069 [2024-12-10 00:15:10.995918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.069 [2024-12-10 00:15:10.995933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.069 qpair failed and we were unable to recover it. 00:33:36.329 [2024-12-10 00:15:11.005892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.329 [2024-12-10 00:15:11.005966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.329 [2024-12-10 00:15:11.005981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.329 [2024-12-10 00:15:11.005989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.329 [2024-12-10 00:15:11.005995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.329 [2024-12-10 00:15:11.006012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.329 qpair failed and we were unable to recover it. 
00:33:36.329 [2024-12-10 00:15:11.015857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.329 [2024-12-10 00:15:11.015914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.329 [2024-12-10 00:15:11.015928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.329 [2024-12-10 00:15:11.015939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.329 [2024-12-10 00:15:11.015946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.329 [2024-12-10 00:15:11.015962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.329 qpair failed and we were unable to recover it. 00:33:36.329 [2024-12-10 00:15:11.025939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.329 [2024-12-10 00:15:11.025993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.329 [2024-12-10 00:15:11.026008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.329 [2024-12-10 00:15:11.026015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.329 [2024-12-10 00:15:11.026021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.329 [2024-12-10 00:15:11.026037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.329 qpair failed and we were unable to recover it. 00:33:36.329 [2024-12-10 00:15:11.035953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.329 [2024-12-10 00:15:11.036017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.329 [2024-12-10 00:15:11.036031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.329 [2024-12-10 00:15:11.036039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.329 [2024-12-10 00:15:11.036045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.329 [2024-12-10 00:15:11.036061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.329 qpair failed and we were unable to recover it. 
00:33:36.329 [2024-12-10 00:15:11.046001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.329 [2024-12-10 00:15:11.046064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.329 [2024-12-10 00:15:11.046077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.329 [2024-12-10 00:15:11.046085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.329 [2024-12-10 00:15:11.046091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.329 [2024-12-10 00:15:11.046107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.329 qpair failed and we were unable to recover it. 00:33:36.329 [2024-12-10 00:15:11.056027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.329 [2024-12-10 00:15:11.056083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.329 [2024-12-10 00:15:11.056097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.329 [2024-12-10 00:15:11.056105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.329 [2024-12-10 00:15:11.056111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.329 [2024-12-10 00:15:11.056127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.329 qpair failed and we were unable to recover it. 00:33:36.329 [2024-12-10 00:15:11.065980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.329 [2024-12-10 00:15:11.066037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.329 [2024-12-10 00:15:11.066052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.329 [2024-12-10 00:15:11.066059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.329 [2024-12-10 00:15:11.066066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.329 [2024-12-10 00:15:11.066082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.329 qpair failed and we were unable to recover it. 
00:33:36.329 [2024-12-10 00:15:11.076091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.329 [2024-12-10 00:15:11.076148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.329 [2024-12-10 00:15:11.076167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.329 [2024-12-10 00:15:11.076174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.329 [2024-12-10 00:15:11.076182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.329 [2024-12-10 00:15:11.076198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.329 qpair failed and we were unable to recover it. 00:33:36.330 [2024-12-10 00:15:11.086110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.330 [2024-12-10 00:15:11.086170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.330 [2024-12-10 00:15:11.086185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.330 [2024-12-10 00:15:11.086192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.330 [2024-12-10 00:15:11.086198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.330 [2024-12-10 00:15:11.086214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.330 qpair failed and we were unable to recover it. 00:33:36.330 [2024-12-10 00:15:11.096140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.330 [2024-12-10 00:15:11.096196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.330 [2024-12-10 00:15:11.096212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.330 [2024-12-10 00:15:11.096219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.330 [2024-12-10 00:15:11.096225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.330 [2024-12-10 00:15:11.096241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.330 qpair failed and we were unable to recover it. 
00:33:36.330 [2024-12-10 00:15:11.106160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.330 [2024-12-10 00:15:11.106219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.330 [2024-12-10 00:15:11.106233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.330 [2024-12-10 00:15:11.106241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.330 [2024-12-10 00:15:11.106247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.330 [2024-12-10 00:15:11.106263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.330 qpair failed and we were unable to recover it. 00:33:36.330 [2024-12-10 00:15:11.116241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.330 [2024-12-10 00:15:11.116300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.330 [2024-12-10 00:15:11.116314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.330 [2024-12-10 00:15:11.116321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.330 [2024-12-10 00:15:11.116327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.330 [2024-12-10 00:15:11.116343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.330 qpair failed and we were unable to recover it. 00:33:36.330 [2024-12-10 00:15:11.126257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.330 [2024-12-10 00:15:11.126315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.330 [2024-12-10 00:15:11.126330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.330 [2024-12-10 00:15:11.126337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.330 [2024-12-10 00:15:11.126344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.330 [2024-12-10 00:15:11.126359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.330 qpair failed and we were unable to recover it. 
00:33:36.330 [2024-12-10 00:15:11.136237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.330 [2024-12-10 00:15:11.136294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.330 [2024-12-10 00:15:11.136308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.330 [2024-12-10 00:15:11.136316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.330 [2024-12-10 00:15:11.136322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.330 [2024-12-10 00:15:11.136338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.330 qpair failed and we were unable to recover it. 00:33:36.330 [2024-12-10 00:15:11.146282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.330 [2024-12-10 00:15:11.146337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.330 [2024-12-10 00:15:11.146351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.330 [2024-12-10 00:15:11.146362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.330 [2024-12-10 00:15:11.146368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.330 [2024-12-10 00:15:11.146383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.330 qpair failed and we were unable to recover it. 00:33:36.330 [2024-12-10 00:15:11.156308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.330 [2024-12-10 00:15:11.156366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.330 [2024-12-10 00:15:11.156381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.330 [2024-12-10 00:15:11.156388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.330 [2024-12-10 00:15:11.156395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.330 [2024-12-10 00:15:11.156410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.330 qpair failed and we were unable to recover it. 
00:33:36.330 [2024-12-10 00:15:11.166339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.330 [2024-12-10 00:15:11.166397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.330 [2024-12-10 00:15:11.166412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.330 [2024-12-10 00:15:11.166419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.330 [2024-12-10 00:15:11.166426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.330 [2024-12-10 00:15:11.166441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.330 qpair failed and we were unable to recover it. 00:33:36.330 [2024-12-10 00:15:11.176374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.330 [2024-12-10 00:15:11.176426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.330 [2024-12-10 00:15:11.176440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.330 [2024-12-10 00:15:11.176447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.330 [2024-12-10 00:15:11.176454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.330 [2024-12-10 00:15:11.176469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.330 qpair failed and we were unable to recover it. 00:33:36.330 [2024-12-10 00:15:11.186424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.330 [2024-12-10 00:15:11.186485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.330 [2024-12-10 00:15:11.186499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.330 [2024-12-10 00:15:11.186507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.330 [2024-12-10 00:15:11.186513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.330 [2024-12-10 00:15:11.186531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.330 qpair failed and we were unable to recover it. 
00:33:36.330 [2024-12-10 00:15:11.196423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.330 [2024-12-10 00:15:11.196482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.330 [2024-12-10 00:15:11.196497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.330 [2024-12-10 00:15:11.196505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.330 [2024-12-10 00:15:11.196512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.330 [2024-12-10 00:15:11.196527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.330 qpair failed and we were unable to recover it. 00:33:36.330 [2024-12-10 00:15:11.206420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.330 [2024-12-10 00:15:11.206481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.330 [2024-12-10 00:15:11.206495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.330 [2024-12-10 00:15:11.206502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.330 [2024-12-10 00:15:11.206509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.330 [2024-12-10 00:15:11.206524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.330 qpair failed and we were unable to recover it. 00:33:36.330 [2024-12-10 00:15:11.216496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.330 [2024-12-10 00:15:11.216550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.330 [2024-12-10 00:15:11.216565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.331 [2024-12-10 00:15:11.216572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.331 [2024-12-10 00:15:11.216578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.331 [2024-12-10 00:15:11.216594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.331 qpair failed and we were unable to recover it. 
00:33:36.331 [2024-12-10 00:15:11.226496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.331 [2024-12-10 00:15:11.226551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.331 [2024-12-10 00:15:11.226565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.331 [2024-12-10 00:15:11.226572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.331 [2024-12-10 00:15:11.226579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.331 [2024-12-10 00:15:11.226594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.331 qpair failed and we were unable to recover it. 00:33:36.331 [2024-12-10 00:15:11.236534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.331 [2024-12-10 00:15:11.236593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.331 [2024-12-10 00:15:11.236607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.331 [2024-12-10 00:15:11.236615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.331 [2024-12-10 00:15:11.236621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.331 [2024-12-10 00:15:11.236636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.331 qpair failed and we were unable to recover it. 00:33:36.331 [2024-12-10 00:15:11.246562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.331 [2024-12-10 00:15:11.246622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.331 [2024-12-10 00:15:11.246636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.331 [2024-12-10 00:15:11.246645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.331 [2024-12-10 00:15:11.246652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.331 [2024-12-10 00:15:11.246666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.331 qpair failed and we were unable to recover it. 
00:33:36.331 [2024-12-10 00:15:11.256590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.331 [2024-12-10 00:15:11.256653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.331 [2024-12-10 00:15:11.256668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.331 [2024-12-10 00:15:11.256675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.331 [2024-12-10 00:15:11.256682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.331 [2024-12-10 00:15:11.256697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.331 qpair failed and we were unable to recover it. 00:33:36.591 [2024-12-10 00:15:11.266650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.591 [2024-12-10 00:15:11.266705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.591 [2024-12-10 00:15:11.266719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.591 [2024-12-10 00:15:11.266727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.591 [2024-12-10 00:15:11.266733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.591 [2024-12-10 00:15:11.266749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.591 qpair failed and we were unable to recover it. 00:33:36.591 [2024-12-10 00:15:11.276669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.591 [2024-12-10 00:15:11.276732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.591 [2024-12-10 00:15:11.276750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.591 [2024-12-10 00:15:11.276759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.591 [2024-12-10 00:15:11.276765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.591 [2024-12-10 00:15:11.276781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.591 qpair failed and we were unable to recover it. 
00:33:36.591 [2024-12-10 00:15:11.286656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.591 [2024-12-10 00:15:11.286713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.591 [2024-12-10 00:15:11.286727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.591 [2024-12-10 00:15:11.286735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.591 [2024-12-10 00:15:11.286741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.591 [2024-12-10 00:15:11.286758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.591 qpair failed and we were unable to recover it. 00:33:36.591 [2024-12-10 00:15:11.296705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.591 [2024-12-10 00:15:11.296759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.591 [2024-12-10 00:15:11.296774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.591 [2024-12-10 00:15:11.296781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.591 [2024-12-10 00:15:11.296788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.591 [2024-12-10 00:15:11.296804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.591 qpair failed and we were unable to recover it. 00:33:36.591 [2024-12-10 00:15:11.306741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.591 [2024-12-10 00:15:11.306800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.591 [2024-12-10 00:15:11.306814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.591 [2024-12-10 00:15:11.306822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.591 [2024-12-10 00:15:11.306828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.591 [2024-12-10 00:15:11.306844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.591 qpair failed and we were unable to recover it. 
00:33:36.591 [2024-12-10 00:15:11.316791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.591 [2024-12-10 00:15:11.316849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.591 [2024-12-10 00:15:11.316863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.591 [2024-12-10 00:15:11.316871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.591 [2024-12-10 00:15:11.316880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.591 [2024-12-10 00:15:11.316895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.591 qpair failed and we were unable to recover it. 00:33:36.591 [2024-12-10 00:15:11.326814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.591 [2024-12-10 00:15:11.326874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.591 [2024-12-10 00:15:11.326888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.591 [2024-12-10 00:15:11.326895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.591 [2024-12-10 00:15:11.326903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.591 [2024-12-10 00:15:11.326918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.591 qpair failed and we were unable to recover it. 00:33:36.591 [2024-12-10 00:15:11.336828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.591 [2024-12-10 00:15:11.336882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.591 [2024-12-10 00:15:11.336896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.591 [2024-12-10 00:15:11.336903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.591 [2024-12-10 00:15:11.336910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.591 [2024-12-10 00:15:11.336925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.591 qpair failed and we were unable to recover it. 
00:33:36.591 [2024-12-10 00:15:11.346892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.591 [2024-12-10 00:15:11.346948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.591 [2024-12-10 00:15:11.346963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.591 [2024-12-10 00:15:11.346970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.591 [2024-12-10 00:15:11.346976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.591 [2024-12-10 00:15:11.346992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.591 qpair failed and we were unable to recover it. 00:33:36.592 [2024-12-10 00:15:11.356917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.592 [2024-12-10 00:15:11.356976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.592 [2024-12-10 00:15:11.356991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.592 [2024-12-10 00:15:11.356999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.592 [2024-12-10 00:15:11.357006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.592 [2024-12-10 00:15:11.357021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.592 qpair failed and we were unable to recover it. 00:33:36.592 [2024-12-10 00:15:11.366908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.592 [2024-12-10 00:15:11.366966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.592 [2024-12-10 00:15:11.366981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.592 [2024-12-10 00:15:11.366988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.592 [2024-12-10 00:15:11.366995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.592 [2024-12-10 00:15:11.367011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.592 qpair failed and we were unable to recover it. 
00:33:36.592 [2024-12-10 00:15:11.376932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.592 [2024-12-10 00:15:11.376988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.592 [2024-12-10 00:15:11.377003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.592 [2024-12-10 00:15:11.377010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.592 [2024-12-10 00:15:11.377017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.592 [2024-12-10 00:15:11.377033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.592 qpair failed and we were unable to recover it. 00:33:36.592 [2024-12-10 00:15:11.386955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.592 [2024-12-10 00:15:11.387011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.592 [2024-12-10 00:15:11.387026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.592 [2024-12-10 00:15:11.387033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.592 [2024-12-10 00:15:11.387040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.592 [2024-12-10 00:15:11.387055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.592 qpair failed and we were unable to recover it. 00:33:36.592 [2024-12-10 00:15:11.396998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.592 [2024-12-10 00:15:11.397056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.592 [2024-12-10 00:15:11.397070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.592 [2024-12-10 00:15:11.397077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.592 [2024-12-10 00:15:11.397084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.592 [2024-12-10 00:15:11.397099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.592 qpair failed and we were unable to recover it. 
00:33:36.592 [2024-12-10 00:15:11.407073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.592 [2024-12-10 00:15:11.407127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.592 [2024-12-10 00:15:11.407145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.592 [2024-12-10 00:15:11.407152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.592 [2024-12-10 00:15:11.407163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.592 [2024-12-10 00:15:11.407180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.592 qpair failed and we were unable to recover it. 00:33:36.592 [2024-12-10 00:15:11.417108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.592 [2024-12-10 00:15:11.417210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.592 [2024-12-10 00:15:11.417226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.592 [2024-12-10 00:15:11.417233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.592 [2024-12-10 00:15:11.417240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.592 [2024-12-10 00:15:11.417257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.592 qpair failed and we were unable to recover it. 00:33:36.592 [2024-12-10 00:15:11.426996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.592 [2024-12-10 00:15:11.427060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.592 [2024-12-10 00:15:11.427075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.592 [2024-12-10 00:15:11.427082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.592 [2024-12-10 00:15:11.427089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.592 [2024-12-10 00:15:11.427105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.592 qpair failed and we were unable to recover it. 
00:33:36.592 [2024-12-10 00:15:11.437106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.592 [2024-12-10 00:15:11.437173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.592 [2024-12-10 00:15:11.437188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.592 [2024-12-10 00:15:11.437196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.592 [2024-12-10 00:15:11.437202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.592 [2024-12-10 00:15:11.437218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.592 qpair failed and we were unable to recover it. 00:33:36.592 [2024-12-10 00:15:11.447125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.592 [2024-12-10 00:15:11.447184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.592 [2024-12-10 00:15:11.447198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.592 [2024-12-10 00:15:11.447206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.592 [2024-12-10 00:15:11.447215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.592 [2024-12-10 00:15:11.447231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.592 qpair failed and we were unable to recover it. 00:33:36.592 [2024-12-10 00:15:11.457163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.592 [2024-12-10 00:15:11.457215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.592 [2024-12-10 00:15:11.457230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.592 [2024-12-10 00:15:11.457238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.592 [2024-12-10 00:15:11.457244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.592 [2024-12-10 00:15:11.457259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.592 qpair failed and we were unable to recover it. 
00:33:36.592 [2024-12-10 00:15:11.467234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.592 [2024-12-10 00:15:11.467332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.592 [2024-12-10 00:15:11.467346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.592 [2024-12-10 00:15:11.467353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.592 [2024-12-10 00:15:11.467359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.592 [2024-12-10 00:15:11.467375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.592 qpair failed and we were unable to recover it. 00:33:36.592 [2024-12-10 00:15:11.477234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.592 [2024-12-10 00:15:11.477293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.592 [2024-12-10 00:15:11.477308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.592 [2024-12-10 00:15:11.477315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.592 [2024-12-10 00:15:11.477322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.593 [2024-12-10 00:15:11.477337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.593 qpair failed and we were unable to recover it. 00:33:36.593 [2024-12-10 00:15:11.487246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.593 [2024-12-10 00:15:11.487313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.593 [2024-12-10 00:15:11.487328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.593 [2024-12-10 00:15:11.487336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.593 [2024-12-10 00:15:11.487342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.593 [2024-12-10 00:15:11.487357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.593 qpair failed and we were unable to recover it. 
00:33:36.593 [2024-12-10 00:15:11.497199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.593 [2024-12-10 00:15:11.497257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.593 [2024-12-10 00:15:11.497272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.593 [2024-12-10 00:15:11.497279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.593 [2024-12-10 00:15:11.497286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.593 [2024-12-10 00:15:11.497301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.593 qpair failed and we were unable to recover it. 00:33:36.593 [2024-12-10 00:15:11.507302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.593 [2024-12-10 00:15:11.507355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.593 [2024-12-10 00:15:11.507369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.593 [2024-12-10 00:15:11.507376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.593 [2024-12-10 00:15:11.507383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.593 [2024-12-10 00:15:11.507399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.593 qpair failed and we were unable to recover it. 00:33:36.593 [2024-12-10 00:15:11.517326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.593 [2024-12-10 00:15:11.517409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.593 [2024-12-10 00:15:11.517424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.593 [2024-12-10 00:15:11.517431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.593 [2024-12-10 00:15:11.517437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.593 [2024-12-10 00:15:11.517453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.593 qpair failed and we were unable to recover it. 
00:33:36.852 [2024-12-10 00:15:11.527382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.852 [2024-12-10 00:15:11.527445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.852 [2024-12-10 00:15:11.527459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.852 [2024-12-10 00:15:11.527466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.852 [2024-12-10 00:15:11.527473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.852 [2024-12-10 00:15:11.527488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.852 qpair failed and we were unable to recover it. 00:33:36.852 [2024-12-10 00:15:11.537396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.852 [2024-12-10 00:15:11.537461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.852 [2024-12-10 00:15:11.537479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.852 [2024-12-10 00:15:11.537487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.852 [2024-12-10 00:15:11.537493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.852 [2024-12-10 00:15:11.537510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.853 qpair failed and we were unable to recover it. 00:33:36.853 [2024-12-10 00:15:11.547418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.853 [2024-12-10 00:15:11.547477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.853 [2024-12-10 00:15:11.547492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.853 [2024-12-10 00:15:11.547500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.853 [2024-12-10 00:15:11.547506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.853 [2024-12-10 00:15:11.547521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.853 qpair failed and we were unable to recover it. 
00:33:36.853 [2024-12-10 00:15:11.557385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.853 [2024-12-10 00:15:11.557446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.853 [2024-12-10 00:15:11.557461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.853 [2024-12-10 00:15:11.557468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.853 [2024-12-10 00:15:11.557474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.853 [2024-12-10 00:15:11.557490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.853 qpair failed and we were unable to recover it. 00:33:36.853 [2024-12-10 00:15:11.567398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.853 [2024-12-10 00:15:11.567460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.853 [2024-12-10 00:15:11.567474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.853 [2024-12-10 00:15:11.567482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.853 [2024-12-10 00:15:11.567488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.853 [2024-12-10 00:15:11.567504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.853 qpair failed and we were unable to recover it. 00:33:36.853 [2024-12-10 00:15:11.577559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.853 [2024-12-10 00:15:11.577635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.853 [2024-12-10 00:15:11.577650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.853 [2024-12-10 00:15:11.577661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.853 [2024-12-10 00:15:11.577668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.853 [2024-12-10 00:15:11.577684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.853 qpair failed and we were unable to recover it. 
00:33:36.853 [2024-12-10 00:15:11.587450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.853 [2024-12-10 00:15:11.587508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.853 [2024-12-10 00:15:11.587522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.853 [2024-12-10 00:15:11.587531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.853 [2024-12-10 00:15:11.587537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.853 [2024-12-10 00:15:11.587552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.853 qpair failed and we were unable to recover it. 00:33:36.853 [2024-12-10 00:15:11.597482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.853 [2024-12-10 00:15:11.597537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.853 [2024-12-10 00:15:11.597552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.853 [2024-12-10 00:15:11.597559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.853 [2024-12-10 00:15:11.597565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.853 [2024-12-10 00:15:11.597580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.853 qpair failed and we were unable to recover it. 00:33:36.853 [2024-12-10 00:15:11.607498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.853 [2024-12-10 00:15:11.607567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.853 [2024-12-10 00:15:11.607581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.853 [2024-12-10 00:15:11.607588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.853 [2024-12-10 00:15:11.607594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.853 [2024-12-10 00:15:11.607610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.853 qpair failed and we were unable to recover it. 
00:33:36.853 [2024-12-10 00:15:11.617532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.853 [2024-12-10 00:15:11.617584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.853 [2024-12-10 00:15:11.617598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.853 [2024-12-10 00:15:11.617605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.853 [2024-12-10 00:15:11.617611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.853 [2024-12-10 00:15:11.617627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.853 qpair failed and we were unable to recover it. 00:33:36.853 [2024-12-10 00:15:11.627605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.853 [2024-12-10 00:15:11.627708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.853 [2024-12-10 00:15:11.627722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.853 [2024-12-10 00:15:11.627729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.853 [2024-12-10 00:15:11.627735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.853 [2024-12-10 00:15:11.627751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.853 qpair failed and we were unable to recover it. 00:33:36.853 [2024-12-10 00:15:11.637599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.853 [2024-12-10 00:15:11.637655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.853 [2024-12-10 00:15:11.637669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.853 [2024-12-10 00:15:11.637677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.853 [2024-12-10 00:15:11.637683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.853 [2024-12-10 00:15:11.637699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.853 qpair failed and we were unable to recover it. 
00:33:36.853 [2024-12-10 00:15:11.647663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.853 [2024-12-10 00:15:11.647718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.853 [2024-12-10 00:15:11.647733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.853 [2024-12-10 00:15:11.647740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.853 [2024-12-10 00:15:11.647746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.853 [2024-12-10 00:15:11.647762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.853 qpair failed and we were unable to recover it. 00:33:36.853 [2024-12-10 00:15:11.657655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.853 [2024-12-10 00:15:11.657730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.853 [2024-12-10 00:15:11.657745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.853 [2024-12-10 00:15:11.657753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.853 [2024-12-10 00:15:11.657759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.853 [2024-12-10 00:15:11.657774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.853 qpair failed and we were unable to recover it. 00:33:36.853 [2024-12-10 00:15:11.667683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.853 [2024-12-10 00:15:11.667734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.853 [2024-12-10 00:15:11.667748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.853 [2024-12-10 00:15:11.667755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.853 [2024-12-10 00:15:11.667761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.853 [2024-12-10 00:15:11.667778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.854 qpair failed and we were unable to recover it. 
00:33:36.854 [2024-12-10 00:15:11.677730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.854 [2024-12-10 00:15:11.677787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.854 [2024-12-10 00:15:11.677802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.854 [2024-12-10 00:15:11.677809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.854 [2024-12-10 00:15:11.677816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.854 [2024-12-10 00:15:11.677832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.854 qpair failed and we were unable to recover it. 00:33:36.854 [2024-12-10 00:15:11.687774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.854 [2024-12-10 00:15:11.687827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.854 [2024-12-10 00:15:11.687841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.854 [2024-12-10 00:15:11.687849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.854 [2024-12-10 00:15:11.687855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.854 [2024-12-10 00:15:11.687871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.854 qpair failed and we were unable to recover it. 00:33:36.854 [2024-12-10 00:15:11.697778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.854 [2024-12-10 00:15:11.697832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.854 [2024-12-10 00:15:11.697846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.854 [2024-12-10 00:15:11.697853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.854 [2024-12-10 00:15:11.697860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.854 [2024-12-10 00:15:11.697876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.854 qpair failed and we were unable to recover it. 
00:33:36.854 [2024-12-10 00:15:11.707801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.854 [2024-12-10 00:15:11.707858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.854 [2024-12-10 00:15:11.707873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.854 [2024-12-10 00:15:11.707883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.854 [2024-12-10 00:15:11.707889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.854 [2024-12-10 00:15:11.707906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.854 qpair failed and we were unable to recover it. 00:33:36.854 [2024-12-10 00:15:11.717837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.854 [2024-12-10 00:15:11.717898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.854 [2024-12-10 00:15:11.717913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.854 [2024-12-10 00:15:11.717920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.854 [2024-12-10 00:15:11.717927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.854 [2024-12-10 00:15:11.717942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.854 qpair failed and we were unable to recover it. 00:33:36.854 [2024-12-10 00:15:11.727947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.854 [2024-12-10 00:15:11.728013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.854 [2024-12-10 00:15:11.728027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.854 [2024-12-10 00:15:11.728034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.854 [2024-12-10 00:15:11.728041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.854 [2024-12-10 00:15:11.728057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.854 qpair failed and we were unable to recover it. 
00:33:36.854 [2024-12-10 00:15:11.738022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.854 [2024-12-10 00:15:11.738125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.854 [2024-12-10 00:15:11.738140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.854 [2024-12-10 00:15:11.738147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.854 [2024-12-10 00:15:11.738154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.854 [2024-12-10 00:15:11.738176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.854 qpair failed and we were unable to recover it. 00:33:36.854 [2024-12-10 00:15:11.748013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.854 [2024-12-10 00:15:11.748105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.854 [2024-12-10 00:15:11.748119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.854 [2024-12-10 00:15:11.748127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.854 [2024-12-10 00:15:11.748133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.854 [2024-12-10 00:15:11.748152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.854 qpair failed and we were unable to recover it. 00:33:36.854 [2024-12-10 00:15:11.757999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.854 [2024-12-10 00:15:11.758064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.854 [2024-12-10 00:15:11.758079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.854 [2024-12-10 00:15:11.758087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.854 [2024-12-10 00:15:11.758094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.854 [2024-12-10 00:15:11.758109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.854 qpair failed and we were unable to recover it. 
00:33:36.854 [2024-12-10 00:15:11.768025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.854 [2024-12-10 00:15:11.768093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.854 [2024-12-10 00:15:11.768107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.854 [2024-12-10 00:15:11.768114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.854 [2024-12-10 00:15:11.768120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.854 [2024-12-10 00:15:11.768136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.854 qpair failed and we were unable to recover it. 00:33:36.854 [2024-12-10 00:15:11.778035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:36.854 [2024-12-10 00:15:11.778098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:36.854 [2024-12-10 00:15:11.778113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:36.854 [2024-12-10 00:15:11.778120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:36.854 [2024-12-10 00:15:11.778127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:36.854 [2024-12-10 00:15:11.778143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:36.854 qpair failed and we were unable to recover it. 00:33:37.113 [2024-12-10 00:15:11.788123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.114 [2024-12-10 00:15:11.788186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.114 [2024-12-10 00:15:11.788200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.114 [2024-12-10 00:15:11.788207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.114 [2024-12-10 00:15:11.788213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:37.114 [2024-12-10 00:15:11.788228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:37.114 qpair failed and we were unable to recover it. 
00:33:37.114 [2024-12-10 00:15:11.798149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.114 [2024-12-10 00:15:11.798217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.114 [2024-12-10 00:15:11.798232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.114 [2024-12-10 00:15:11.798238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.114 [2024-12-10 00:15:11.798245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0294000b90 00:33:37.114 [2024-12-10 00:15:11.798261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:33:37.114 qpair failed and we were unable to recover it. 00:33:37.114 [2024-12-10 00:15:11.808172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.114 [2024-12-10 00:15:11.808268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.114 [2024-12-10 00:15:11.808320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.114 [2024-12-10 00:15:11.808343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.114 [2024-12-10 00:15:11.808361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f029c000b90 00:33:37.114 [2024-12-10 00:15:11.808409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.114 qpair failed and we were unable to recover it. 00:33:37.114 [2024-12-10 00:15:11.818269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.114 [2024-12-10 00:15:11.818362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.114 [2024-12-10 00:15:11.818387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.114 [2024-12-10 00:15:11.818400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.114 [2024-12-10 00:15:11.818412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f029c000b90 00:33:37.114 [2024-12-10 00:15:11.818440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:33:37.114 qpair failed and we were unable to recover it. 
00:33:37.114 [2024-12-10 00:15:11.828296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.114 [2024-12-10 00:15:11.828408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.114 [2024-12-10 00:15:11.828462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.114 [2024-12-10 00:15:11.828486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.114 [2024-12-10 00:15:11.828506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0290000b90 00:33:37.114 [2024-12-10 00:15:11.828552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:37.114 qpair failed and we were unable to recover it. 00:33:37.114 [2024-12-10 00:15:11.838266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.114 [2024-12-10 00:15:11.838347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.114 [2024-12-10 00:15:11.838377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.114 [2024-12-10 00:15:11.838391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.114 [2024-12-10 00:15:11.838402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f0290000b90 00:33:37.114 [2024-12-10 00:15:11.838432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:33:37.114 qpair failed and we were unable to recover it. 00:33:37.114 [2024-12-10 00:15:11.838547] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:33:37.114 A controller has encountered a failure and is being reset. 00:33:37.114 [2024-12-10 00:15:11.848359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.114 [2024-12-10 00:15:11.848494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.114 [2024-12-10 00:15:11.848552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.114 [2024-12-10 00:15:11.848577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.114 [2024-12-10 00:15:11.848596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24c9be0 00:33:37.114 [2024-12-10 00:15:11.848642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:37.114 qpair failed and we were unable to recover it. 
00:33:37.114 [2024-12-10 00:15:11.858346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:37.114 [2024-12-10 00:15:11.858432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:37.114 [2024-12-10 00:15:11.858457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:37.114 [2024-12-10 00:15:11.858470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:37.114 [2024-12-10 00:15:11.858481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24c9be0 00:33:37.114 [2024-12-10 00:15:11.858508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:33:37.114 qpair failed and we were unable to recover it. 00:33:37.114 Controller properly reset. 00:33:37.114 Initializing NVMe Controllers 00:33:37.114 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:37.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:37.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:37.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:37.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:37.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:37.114 Initialization complete. Launching workers. 00:33:37.114 Starting thread on core 1 00:33:37.114 Starting thread on core 2 00:33:37.114 Starting thread on core 3 00:33:37.114 Starting thread on core 0 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:37.114 00:33:37.114 real 0m10.785s 00:33:37.114 user 0m19.429s 00:33:37.114 sys 0m4.666s 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:37.114 ************************************ 00:33:37.114 END TEST nvmf_target_disconnect_tc2 00:33:37.114 ************************************ 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
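With the controller reset and the worker threads re-attached, nvmf_target_disconnect_tc2 finishes (the real/user/sys lines above) and the trace from here on is the shared nvmftestfini teardown: unload the kernel NVMe/TCP modules, kill the nvmf target by the PID recorded at startup, strip the SPDK-tagged iptables rules, and remove the test network namespace. Condensed into one hypothetical helper, the individual commands being the ones visible in the trace while the function shape and error handling are assumptions:

#!/usr/bin/env bash
# Hypothetical condensation of the nvmftestfini steps traced below; not the original helper.
cleanup_nvmf_tcp_test() {
    local nvmfpid=$1                      # target PID recorded at startup (531488 in this run)
    # Unload host-side kernel modules; harmless if they were never loaded.
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true
    # Stop the nvmf target application.
    kill "$nvmfpid" 2>/dev/null || true
    # Keep every firewall rule except the ones tagged SPDK_NVMF by the setup phase.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Remove the target-side namespace and flush the initiator interface.
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1 2>/dev/null || true
}
cleanup_nvmf_tcp_test 531488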
00:33:37.114 rmmod nvme_tcp 00:33:37.114 rmmod nvme_fabrics 00:33:37.114 rmmod nvme_keyring 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 531488 ']' 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 531488 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 531488 ']' 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 531488 00:33:37.114 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:33:37.115 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:37.115 00:15:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531488 00:33:37.115 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:33:37.115 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:33:37.115 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531488' 00:33:37.115 killing process with pid 531488 00:33:37.115 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 531488 00:33:37.115 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 531488 00:33:37.373 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:37.373 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:37.374 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:37.374 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:33:37.374 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:33:37.374 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:37.374 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:33:37.374 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:37.374 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:37.374 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:37.374 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:37.374 00:15:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.912 00:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:39.912 00:33:39.912 real 0m19.470s 00:33:39.912 user 0m46.989s 00:33:39.912 sys 0m9.579s 00:33:39.912 00:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:39.912 00:15:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:39.912 ************************************ 00:33:39.912 END TEST nvmf_target_disconnect 00:33:39.912 ************************************ 00:33:39.912 00:15:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:39.912 00:33:39.912 real 5m52.839s 00:33:39.912 user 10m32.514s 00:33:39.912 sys 1m57.562s 00:33:39.912 00:15:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:39.912 00:15:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:39.912 ************************************ 00:33:39.912 END TEST nvmf_host 00:33:39.912 ************************************ 00:33:39.912 00:15:14 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:33:39.912 00:15:14 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:33:39.912 00:15:14 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:33:39.912 00:15:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:39.912 00:15:14 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:39.912 00:15:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.912 ************************************ 00:33:39.912 START TEST nvmf_target_core_interrupt_mode 00:33:39.912 ************************************ 00:33:39.912 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:33:39.912 * Looking for test storage... 
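The wall of xtrace output that follows is mostly scripts/common.sh deciding, via lcov --version, whether the installed lcov is older than 2.x: cmp_versions splits both version strings on '.', '-' and ':' and compares the numeric components left to right, and lt() is simply cmp_versions with the '<' operator. A stand-alone paraphrase of that logic, written only to make the trace easier to read; the names mirror the trace, but this is a sketch, not the original script:

#!/usr/bin/env bash
# Paraphrased version-comparison logic as traced from scripts/common.sh (lt / cmp_versions).
cmp_versions() {                      # usage: cmp_versions 1.15 '<' 2
    local ver1 ver2 ver1_l ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}     # missing components count as 0
        (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' || $op == '>=' || $op == '<=' ]] # equal versions only satisfy these
}
lt() { cmp_versions "$1" '<' "$2"; }                # e.g. lt 1.15 2 succeeds (exit 0)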
00:33:39.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:33:39.912 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:39.912 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:33:39.912 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:39.912 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:39.912 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:39.912 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:39.912 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:39.912 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:33:39.912 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:39.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.913 --rc genhtml_branch_coverage=1 00:33:39.913 --rc genhtml_function_coverage=1 00:33:39.913 --rc genhtml_legend=1 00:33:39.913 --rc geninfo_all_blocks=1 00:33:39.913 --rc geninfo_unexecuted_blocks=1 00:33:39.913 00:33:39.913 ' 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:39.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.913 --rc genhtml_branch_coverage=1 00:33:39.913 --rc genhtml_function_coverage=1 00:33:39.913 --rc genhtml_legend=1 00:33:39.913 --rc geninfo_all_blocks=1 00:33:39.913 --rc geninfo_unexecuted_blocks=1 00:33:39.913 00:33:39.913 ' 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:39.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.913 --rc genhtml_branch_coverage=1 00:33:39.913 --rc genhtml_function_coverage=1 00:33:39.913 --rc genhtml_legend=1 00:33:39.913 --rc geninfo_all_blocks=1 00:33:39.913 --rc geninfo_unexecuted_blocks=1 00:33:39.913 00:33:39.913 ' 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:39.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.913 --rc genhtml_branch_coverage=1 00:33:39.913 --rc genhtml_function_coverage=1 00:33:39.913 --rc genhtml_legend=1 00:33:39.913 --rc geninfo_all_blocks=1 00:33:39.913 --rc geninfo_unexecuted_blocks=1 00:33:39.913 00:33:39.913 ' 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.913 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:39.914 ************************************ 00:33:39.914 START TEST nvmf_abort 00:33:39.914 ************************************ 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:33:39.914 * Looking for test storage... 00:33:39.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:39.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.914 --rc genhtml_branch_coverage=1 00:33:39.914 --rc genhtml_function_coverage=1 00:33:39.914 --rc genhtml_legend=1 00:33:39.914 --rc geninfo_all_blocks=1 00:33:39.914 --rc geninfo_unexecuted_blocks=1 00:33:39.914 00:33:39.914 ' 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:39.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.914 --rc genhtml_branch_coverage=1 00:33:39.914 --rc genhtml_function_coverage=1 00:33:39.914 --rc genhtml_legend=1 00:33:39.914 --rc geninfo_all_blocks=1 00:33:39.914 --rc geninfo_unexecuted_blocks=1 00:33:39.914 00:33:39.914 ' 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:39.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.914 --rc genhtml_branch_coverage=1 00:33:39.914 --rc genhtml_function_coverage=1 00:33:39.914 --rc genhtml_legend=1 00:33:39.914 --rc geninfo_all_blocks=1 00:33:39.914 --rc geninfo_unexecuted_blocks=1 00:33:39.914 00:33:39.914 ' 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:39.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.914 --rc genhtml_branch_coverage=1 00:33:39.914 --rc genhtml_function_coverage=1 00:33:39.914 --rc genhtml_legend=1 00:33:39.914 --rc geninfo_all_blocks=1 00:33:39.914 --rc geninfo_unexecuted_blocks=1 00:33:39.914 00:33:39.914 ' 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.914 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:33:39.915 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:40.174 00:15:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:33:40.174 00:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:46.751 00:15:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.751 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:46.752 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
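This gather_supported_nvmf_pci_devs pass is plain device discovery: the known Intel e810/x722 and Mellanox PCI IDs are collected, the two 0x8086:0x159b functions at 0000:86:00.0/1 survive the filter, and each PCI address is then resolved to its kernel net interface through sysfs before the "Found net devices under ..." lines are printed. A minimal stand-alone version of that last lookup, using the same /sys glob the script uses; the function name and example invocation are illustrative only:

#!/usr/bin/env bash
# Illustrative only: map a PCI address to the net interface(s) the kernel created for it,
# via the same /sys/bus/pci/devices/<pci>/net/* glob used by nvmf/common.sh.
pci_to_netdevs() {
    local pci=$1                                   # e.g. 0000:86:00.0 (from the log)
    local pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
    [[ -e ${pci_net_devs[0]} ]] || { echo "no netdev under $pci" >&2; return 1; }
    printf '%s\n' "${pci_net_devs[@]##*/}"         # strip the sysfs path, keep the ifname
}
pci_to_netdevs 0000:86:00.0                        # prints cvl_0_0 on this test bed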
00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:46.752 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:46.752 Found net devices under 0000:86:00.0: cvl_0_0 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:46.752 Found net devices under 0000:86:00.1: cvl_0_1 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:46.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:33:46.752 00:33:46.752 --- 10.0.0.2 ping statistics --- 00:33:46.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.752 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:46.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:33:46.752 00:33:46.752 --- 10.0.0.1 ping statistics --- 00:33:46.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.752 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=536480 
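Put together, the nvmf_tcp_init trace above builds a two-endpoint topology on a single box: the cvl_0_0 port is moved into a private network namespace and plays the target (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables exception is opened for the NVMe/TCP port, and a ping in each direction confirms the path. The same sequence as one runnable block; interface and namespace names are copied from the log, and this is a condensation rather than the original helper:

#!/usr/bin/env bash
set -e
# Condensed from the nvmf_tcp_init trace: isolate the target NIC in its own namespace.
TARGET_IF=cvl_0_0  INIT_IF=cvl_0_1  NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INIT_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"               # target side lives inside the namespace
ip addr add 10.0.0.1/24 dev "$INIT_IF"             # initiator address (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# Let NVMe/TCP traffic through, tagged so teardown can strip exactly this rule.
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1             # target namespace -> root namespace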
00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 536480 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 536480 ']' 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.752 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.753 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.753 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.753 00:15:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:46.753 [2024-12-10 00:15:20.782459] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:46.753 [2024-12-10 00:15:20.783354] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:33:46.753 [2024-12-10 00:15:20.783387] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:46.753 [2024-12-10 00:15:20.863348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:46.753 [2024-12-10 00:15:20.905967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:46.753 [2024-12-10 00:15:20.906000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:46.753 [2024-12-10 00:15:20.906008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:46.753 [2024-12-10 00:15:20.906016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:46.753 [2024-12-10 00:15:20.906021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:46.753 [2024-12-10 00:15:20.907225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:46.753 [2024-12-10 00:15:20.907327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:46.753 [2024-12-10 00:15:20.907329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:46.753 [2024-12-10 00:15:20.976003] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:46.753 [2024-12-10 00:15:20.976742] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:46.753 [2024-12-10 00:15:20.976820] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
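The target itself is launched inside the namespace in interrupt mode (-e 0xFFFF trace mask, -m 0xE so reactors run on cores 1-3), and the harness then blocks until the RPC socket answers, which is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above. A rough equivalent of that start-and-wait step; the nvmf_tgt command line is the one from the trace, while the polling loop, socket path and timeout are assumptions about what waitforlisten does:

#!/usr/bin/env bash
# Sketch of the nvmfappstart/waitforlisten step; the real helpers live in nvmf/common.sh.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
SOCK=/var/tmp/spdk.sock

ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!

# Poll the RPC socket until the app answers (timeout is an arbitrary choice here).
# The UNIX socket is filesystem-based, so it is reachable from the root namespace.
for _ in $(seq 1 100); do
    if "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is up"
        break
    fi
    sleep 0.5
done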
00:33:46.753 [2024-12-10 00:15:20.976975] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:46.753 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:46.753 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:33:46.753 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:46.753 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:46.753 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:46.753 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:46.753 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:33:46.753 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.753 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:46.753 [2024-12-10 00:15:21.668109] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.753 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.753 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:33:46.753 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.753 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:47.011 Malloc0 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:47.011 Delay0 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:47.011 [2024-12-10 00:15:21.768092] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.011 00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:33:47.011 [2024-12-10 00:15:21.894487] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:49.543 Initializing NVMe Controllers 00:33:49.543 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:49.543 controller IO queue size 128 less than required 00:33:49.543 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:33:49.543 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:33:49.543 Initialization complete. Launching workers. 
00:33:49.543 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36529 00:33:49.543 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36590, failed to submit 66 00:33:49.543 success 36529, unsuccessful 61, failed 0 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:49.543 rmmod nvme_tcp 00:33:49.543 rmmod nvme_fabrics 00:33:49.543 rmmod nvme_keyring 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 536480 ']' 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 536480 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 536480 ']' 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 536480 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 536480 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 536480' 00:33:49.543 killing process with pid 536480 00:33:49.543 
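For anyone reproducing the nvmf_abort run above outside the autotest harness, a minimal sketch of the same sequence, reconstructed from the rpc_cmd and abort invocations visible in the log, follows. It assumes an nvmf_tgt is already running and reachable on the default RPC socket, and it substitutes scripts/rpc.py for the harness's rpc_cmd wrapper; the 10.0.0.2:4420 listener and the tool paths are copied from the log rather than invented.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk   # checkout path as used in this job
RPC="$SPDK/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256        # transport options copied verbatim from the run above
$RPC bdev_malloc_create 64 4096 -b Malloc0                 # 64 MB malloc bdev with 4096-byte blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s artificial latency so reads stay queued and can be aborted
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Flood the slow namespace with reads at queue depth 128 and abort them, then tear the subsystem down.
"$SPDK/build/examples/abort" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0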
00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 536480 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 536480 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.543 00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.075 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:52.075 00:33:52.075 real 0m11.792s 00:33:52.075 user 0m10.733s 00:33:52.075 sys 0m5.765s 00:33:52.075 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:52.075 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:33:52.075 ************************************ 00:33:52.075 END TEST nvmf_abort 00:33:52.075 ************************************ 00:33:52.075 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:52.075 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:52.075 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:52.075 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:52.075 ************************************ 00:33:52.075 START TEST nvmf_ns_hotplug_stress 00:33:52.075 ************************************ 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:33:52.076 * Looking for test storage... 
00:33:52.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:52.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.076 --rc genhtml_branch_coverage=1 00:33:52.076 --rc genhtml_function_coverage=1 00:33:52.076 --rc genhtml_legend=1 00:33:52.076 --rc geninfo_all_blocks=1 00:33:52.076 --rc geninfo_unexecuted_blocks=1 00:33:52.076 00:33:52.076 ' 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:52.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.076 --rc genhtml_branch_coverage=1 00:33:52.076 --rc genhtml_function_coverage=1 00:33:52.076 --rc genhtml_legend=1 00:33:52.076 --rc geninfo_all_blocks=1 00:33:52.076 --rc geninfo_unexecuted_blocks=1 00:33:52.076 00:33:52.076 ' 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:52.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.076 --rc genhtml_branch_coverage=1 00:33:52.076 --rc genhtml_function_coverage=1 00:33:52.076 --rc genhtml_legend=1 00:33:52.076 --rc geninfo_all_blocks=1 00:33:52.076 --rc geninfo_unexecuted_blocks=1 00:33:52.076 00:33:52.076 ' 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:52.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.076 --rc genhtml_branch_coverage=1 00:33:52.076 --rc genhtml_function_coverage=1 
00:33:52.076 --rc genhtml_legend=1 00:33:52.076 --rc geninfo_all_blocks=1 00:33:52.076 --rc geninfo_unexecuted_blocks=1 00:33:52.076 00:33:52.076 ' 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.076 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:33:52.077 00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:57.348 00:15:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:57.348 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:57.607 00:15:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:57.607 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:57.607 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:57.607 
00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:57.607 Found net devices under 0000:86:00.0: cvl_0_0 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:57.607 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:57.608 Found net devices under 0000:86:00.1: cvl_0_1 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:57.608 00:15:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:57.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:57.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:33:57.608 00:33:57.608 --- 10.0.0.2 ping statistics --- 00:33:57.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.608 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:33:57.608 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:57.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:57.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:33:57.868 00:33:57.868 --- 10.0.0.1 ping statistics --- 00:33:57.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.868 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=540490 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 540490 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 540490 ']' 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:57.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
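The nvmf_tcp_init block above is what gives the test a real point-to-point TCP path: the first e810 port (cvl_0_0, the target side) is moved into a private network namespace while the second port (cvl_0_1) stays in the host namespace as the initiator interface. Condensed to just the ip, iptables and ping commands that appear in the log, and bearing in mind that the cvl_0_* names and 10.0.0.x addresses are specific to this test bed, the plumbing amounts to:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port now only visible inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address (namespace side)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # the harness also tags this rule with an SPDK_NVMF comment
ping -c 1 10.0.0.2                                                 # host -> target namespace reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> host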
00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:57.868 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:57.868 [2024-12-10 00:15:32.645430] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:57.868 [2024-12-10 00:15:32.646383] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:33:57.868 [2024-12-10 00:15:32.646422] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:57.868 [2024-12-10 00:15:32.726250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:57.868 [2024-12-10 00:15:32.766928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:57.868 [2024-12-10 00:15:32.766965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:57.868 [2024-12-10 00:15:32.766972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:57.868 [2024-12-10 00:15:32.766979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:57.868 [2024-12-10 00:15:32.766984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:57.868 [2024-12-10 00:15:32.768306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:57.868 [2024-12-10 00:15:32.768415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:57.868 [2024-12-10 00:15:32.768416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:58.127 [2024-12-10 00:15:32.836977] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:58.127 [2024-12-10 00:15:32.837692] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:58.127 [2024-12-10 00:15:32.837916] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:58.127 [2024-12-10 00:15:32.838011] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
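The nvmfappstart step above launches the target inside that namespace with interrupt mode enabled, which is where the "Set spdk_thread (...) to intr mode" notices come from. A rough stand-alone equivalent, using the launch command shown in the log and a crude socket poll in place of the harness's waitforlisten helper, would be:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
# Wait for the default RPC socket before issuing any rpc.py calls (simplified stand-in for waitforlisten).
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done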
00:33:58.127 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:58.127 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:33:58.127 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:58.127 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:58.127 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:33:58.127 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:58.127 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:33:58.127 00:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:58.385 [2024-12-10 00:15:33.073178] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:58.385 00:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:58.385 00:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:58.644 [2024-12-10 00:15:33.477574] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.644 00:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:58.903 00:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:33:59.161 Malloc0 00:33:59.161 00:15:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:59.420 Delay0 00:33:59.421 00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:33:59.421 00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:33:59.681 NULL1 00:33:59.681 00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 NULL1 00:33:59.940 00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=540756 00:33:59.940 00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:33:59.940 00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:33:59.940 00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:00.198 00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:00.456 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:34:00.456 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:34:00.456 true 00:34:00.456 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:00.456 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:00.714 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:00.972 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:34:00.972 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:34:01.231 true 00:34:01.231 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:01.231 00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:01.231 Read completed with error (sct=0, sc=11) 00:34:01.488 00:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:01.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.488 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:01.488 00:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:34:01.488 00:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:34:01.746 true 00:34:01.746 00:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:01.746 00:15:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:02.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:02.680 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:02.680 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:34:02.680 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:34:02.938 true 00:34:02.938 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:02.938 00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:03.197 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:03.455 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:34:03.455 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:34:03.455 true 00:34:03.716 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:03.716 00:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:04.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:04.651 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:04.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:04.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:04.909 Message suppressed 999 
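The repeating pattern above (null_size stepping to 1001, 1002, ... with a remove_ns/add_ns pair before each resize) is the hotplug-stress loop itself: spdk_nvme_perf reads from the subsystem on lcore 0 for 30 seconds while namespace 1 (Delay0) is detached and re-attached and the NULL1 bdev is resized upward on every pass. The bursts of "Read completed with error (sct=0, sc=11)" are the expected casualties of the detach window (0x0b, Invalid Namespace or Format, if the code is printed in decimal), and perf keeps running and merely rate-limits the prints, hence the "Message suppressed 999 times" lines. Reconstructed from the rpc.py calls visible in the log, one plausible rendering of the loop, assuming the cnode1 subsystem, Delay0 namespace and NULL1 bdev created earlier, is:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
RPC="$SPDK/scripts/rpc.py"
"$SPDK/build/bin/spdk_nvme_perf" -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                          # cycle for as long as the perf run is alive
    $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # detach namespace 1 (Delay0) under load
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # re-attach it
    null_size=$((null_size + 1))
    $RPC bdev_null_resize NULL1 "$null_size"                       # grow NULL1 one step per pass (1001, 1002, ...)
done
wait "$PERF_PID"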
times: Read completed with error (sct=0, sc=11) 00:34:04.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:04.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:04.909 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:04.909 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:34:04.909 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:34:05.167 true 00:34:05.167 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:05.167 00:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:06.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:06.102 00:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:06.102 00:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:34:06.102 00:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:34:06.360 true 00:34:06.360 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:06.360 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:06.618 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:06.876 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:34:06.876 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:34:06.876 true 00:34:06.877 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:06.877 00:15:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:08.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:08.261 00:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:08.261 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:34:08.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:08.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:08.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:08.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:08.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:08.519 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:34:08.519 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:34:08.519 true 00:34:08.519 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:08.520 00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:09.454 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:09.712 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:34:09.712 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:34:09.712 true 00:34:09.712 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:09.712 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:09.971 00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:10.229 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:34:10.229 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:34:10.487 true 00:34:10.487 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:10.487 00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:11.420 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:11.420 00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:34:11.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:11.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:11.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:11.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:11.678 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:11.678 00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:34:11.678 00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:34:11.936 true 00:34:11.936 00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:11.936 00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:12.869 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:12.869 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:12.869 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:34:12.869 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:34:13.127 true 00:34:13.127 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:13.127 00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:13.385 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:13.643 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:34:13.643 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:34:13.903 true 00:34:13.903 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:13.903 00:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:14.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:14.838 00:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:14.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:14.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:15.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:15.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:15.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:15.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:15.097 00:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:34:15.097 00:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:34:15.356 true 00:34:15.356 00:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:15.356 00:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:16.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:16.298 00:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:16.298 00:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:34:16.298 00:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:34:16.559 true 00:34:16.559 00:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:16.559 00:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:16.817 00:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:17.073 00:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:34:17.073 00:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:34:17.074 true 00:34:17.074 00:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:17.074 00:15:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:18.447 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:34:18.447 00:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:18.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.447 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:18.447 00:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:34:18.447 00:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:34:18.706 true 00:34:18.706 00:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:18.706 00:15:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:19.640 00:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:19.640 00:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:34:19.640 00:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:34:19.898 true 00:34:19.898 00:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:19.898 00:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:20.158 00:15:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:20.416 00:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:34:20.416 00:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:34:20.416 true 00:34:20.416 00:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:20.416 00:15:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:34:21.789 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:21.789 00:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:21.789 00:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:34:21.789 00:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:34:22.047 true 00:34:22.047 00:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:22.047 00:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:22.047 00:15:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:22.305 00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:34:22.305 00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:34:22.563 true 00:34:22.563 00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:22.563 00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:23.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:23.498 00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:23.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:23.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:23.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:23.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:23.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:23.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:23.757 00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:34:23.757 00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:34:24.015 true 00:34:24.015 00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:24.015 00:15:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:24.949 00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:24.949 00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:34:24.949 00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:34:25.207 true 00:34:25.207 00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:25.207 00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:25.465 00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:25.723 00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:34:25.723 00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:34:25.981 true 00:34:25.981 00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:25.981 00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:26.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:26.921 00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:27.179 00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:34:27.179 00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:34:27.179 true 00:34:27.179 00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:27.179 00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:27.436 00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:27.694 00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:34:27.694 00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:34:27.953 true 00:34:27.953 00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:27.953 00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:28.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:29.148 00:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:29.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:29.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:29.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:29.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:29.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:29.148 00:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:34:29.148 00:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:34:29.406 true 00:34:29.406 00:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:29.406 00:16:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:30.340 00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:30.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:34:30.340 Initializing NVMe Controllers 00:34:30.340 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:30.340 Controller IO queue size 128, less than required. 00:34:30.340 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:30.340 Controller IO queue size 128, less than required. 00:34:30.340 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:30.340 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:30.340 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:34:30.340 Initialization complete. Launching workers. 
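The trace above is the first half of the ns_hotplug_stress run: while a background I/O job (PID 540756 here) keeps reading, the script loops over its lines 44-50, hot-removing and re-adding namespace 1 of nqn.2016-06.io.spdk:cnode1 and growing the NULL1 null bdev one step per pass, so the workload keeps seeing the namespace come and go (the suppressed "Read completed with error (sct=0, sc=11)" messages). A minimal sketch of that loop, with the variable names (perf_pid, rpc, null_size) assumed rather than quoted from the script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
    perf_pid=540756                                 # the background I/O job in this run; normally captured from $! (assumed)
    null_size=1000
    while kill -0 "$perf_pid"; do                   # keep going while the I/O job is alive
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove NSID 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # re-attach the Delay0 bdev as a namespace
        null_size=$((null_size + 1))
        "$rpc" bdev_null_resize NULL1 "$null_size"  # resize the NULL1 bdev while I/O is in flight
    done

The loop stops once kill -0 fails because the I/O job has exited ("No such process" below), and the latency summary that follows appears to be that job's final report.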
00:34:30.340 ======================================================== 00:34:30.340 Latency(us) 00:34:30.340 Device Information : IOPS MiB/s Average min max 00:34:30.340 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2073.47 1.01 40697.46 1987.08 1054664.78 00:34:30.340 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16990.77 8.30 7510.61 1321.80 455854.97 00:34:30.340 ======================================================== 00:34:30.340 Total : 19064.25 9.31 11120.09 1321.80 1054664.78 00:34:30.340 00:34:30.602 00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:34:30.602 00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:34:30.602 true 00:34:30.602 00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 540756 00:34:30.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (540756) - No such process 00:34:30.602 00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 540756 00:34:30.602 00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:30.860 00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:31.119 00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:34:31.119 00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:34:31.119 00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:34:31.119 00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:31.119 00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:34:31.378 null0 00:34:31.378 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:31.378 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:31.378 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:34:31.378 null1 00:34:31.378 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:31.378 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:31.378 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:34:31.637 null2 00:34:31.637 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:31.637 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:31.637 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:34:31.896 null3 00:34:31.896 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:31.896 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:31.896 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:34:31.896 null4 00:34:31.896 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:31.896 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:31.896 00:16:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:34:32.155 null5 00:34:32.155 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:32.155 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:32.155 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:34:32.414 null6 00:34:32.414 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:32.414 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:32.414 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:34:32.674 null7 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
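Here the second half of the test begins: script lines 58-60 create eight null bdevs (null0 through null7, each created with "100 4096", i.e. 100 MB with a 4096-byte block size), and lines 62-64 fork one background add_remove worker per bdev, collecting the PIDs that the wait a little further down (wait 546088 546089 ...) joins. Roughly, with the loop variables assumed and add_remove itself sketched after its trace lines below:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096   # name, size in MB, block size
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &            # worker i drives NSID i+1 with bdev null$i
        pids+=("$!")
    done
    wait "${pids[@]}"

Running the eight workers concurrently is what makes the xtrace lines below interleave.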
00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
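Each add_remove worker traced above (script lines 14-18) does nothing more than pin its namespace ID to its null bdev and toggle it ten times; the matching nvmf_subsystem_remove_ns calls show up in the lines that follow. A sketch of the traced behaviour, not the script verbatim:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # attach the bdev at a fixed NSID
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # and detach it again
        done
    }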
00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.674 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 546088 546089 546091 546094 546095 546097 546099 546100 00:34:32.675 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:34:32.675 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:32.675 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:34:32.675 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:34:32.675 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.675 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:32.933 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:32.933 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:32.933 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:32.933 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:32.933 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:32.933 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:32.933 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:32.933 
00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:32.933 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:32.933 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.933 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:32.934 00:16:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:32.934 00:16:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:33.192 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:33.192 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:33.192 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:33.192 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:33.192 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:33.192 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:33.192 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:33.192 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:33.450 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.450 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.450 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:33.450 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.450 00:16:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.450 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:33.450 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.450 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.450 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:33.450 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.451 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.451 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:33.451 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.451 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.451 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:33.451 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.451 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.451 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:33.451 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.451 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.451 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:33.451 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.451 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.451 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:33.710 00:16:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:33.710 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:33.710 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:33.710 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:33.710 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:33.710 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:33.710 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:33.710 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:33.968 00:16:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:33.968 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:34.227 00:16:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.227 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:34.486 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:34.486 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:34.486 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:34.486 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:34.486 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:34.486 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:34.486 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:34.486 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:34.745 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
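The block of trace above is the core of the stress test: ns_hotplug_stress.sh lines @16-@18 drive a counter loop that, on every pass, hot-adds namespaces 1-8 (each backed by one of the null0-null7 bdevs) to nqn.2016-06.io.spdk:cnode1 in a shuffled order and then hot-removes them again, ten passes in total. A minimal sketch of that loop, reconstructed only from the xtrace and not from the script itself (shuf stands in for whatever randomization the real script uses, and the interleaved counter lines later in the log suggest some RPCs are also issued concurrently):

# Sketch of the add/remove loop traced above (ns_hotplug_stress.sh @16-@18), not the verbatim script.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for (( i = 0; i < 10; ++i )); do                     # @16: ten hotplug iterations
    for n in $(shuf -i 1-8); do                      # @17: add NSID n, backed by bdev null$((n-1))
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    for n in $(shuf -i 1-8); do                      # @18: remove the same namespaces again
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done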
00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:34.746 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
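The null0 through null7 bdevs that the loop attaches are created earlier in the test run, outside this excerpt, so the trace never shows their setup. Purely as an illustration of what that earlier step typically looks like, assuming SPDK's bdev_null_create RPC with name, size in MiB and block size as arguments (the sizes below are made up, not taken from this log):

# Hypothetical creation of the eight null bdevs referenced in the loop above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
for idx in {0..7}; do
    "$rpc_py" bdev_null_create "null$idx" 100 4096   # 100 MiB, 4 KiB blocks: illustrative values only
done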
00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.004 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:35.262 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.262 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.262 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.262 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:35.262 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.262 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:35.262 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.262 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.262 00:16:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:35.262 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:35.262 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:35.262 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:35.262 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:35.262 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:35.263 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:35.263 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:35.263 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
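Every rpc.py call in this loop is a thin CLI front end that posts a JSON-RPC request to the nvmf_tgt application's RPC socket. As an assumption about the wire format (the field names follow the usual SPDK RPC conventions and are not visible in this log), the add/remove pair for NSID 5 corresponds roughly to:

# Assumed JSON-RPC requests behind the CLI calls above; check spdk/scripts/rpc.py for the exact schema.
# nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
#   {"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
#    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
#               "namespace": {"nsid": 5, "bdev_name": "null4"}}}
# nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
#   {"jsonrpc": "2.0", "id": 2, "method": "nvmf_subsystem_remove_ns",
#    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 5}}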
00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:35.521 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:35.780 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:35.780 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:35.780 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:35.780 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:35.780 00:16:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:35.780 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:35.780 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:35.780 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.040 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:36.298 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:36.299 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:36.299 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:36.299 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:36.299 00:16:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.299 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:34:36.558 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:34:36.558 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:34:36.558 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:34:36.558 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:34:36.558 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:34:36.558 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:34:36.558 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:36.558 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.816 00:16:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:34:36.816 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:36.817 rmmod nvme_tcp 00:34:36.817 rmmod nvme_fabrics 00:34:36.817 rmmod nvme_keyring 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 540490 ']' 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 540490 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 540490 ']' 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 540490 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 
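From this point the trace is the nvmf_ns_hotplug_stress teardown: the EXIT trap is cleared and nvmftestfini unloads the kernel NVMe-oF initiator modules, kills the nvmf_tgt process (pid 540490, running as reactor_1), restores iptables while filtering out the SPDK_NVMF rules, removes the test network namespace and flushes the cvl_0_1 address. A condensed sketch of that sequence in the order it appears in the log (killprocess, iptr and remove_spdk_ns are the test framework's own helpers, defined in nvmf/common.sh and autotest_common.sh rather than shown here):

# Condensed view of the teardown traced around this point; not the verbatim helper implementations.
trap - SIGINT SIGTERM EXIT
sync
modprobe -v -r nvme-tcp                                 # also drags out nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
kill 540490 && wait 540490                              # killprocess: stop the nvmf_tgt reactor
iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop the SPDK_NVMF rules
ip netns delete cvl_0_0_ns_spdk                         # remove_spdk_ns (namespace name taken from the trace)
ip -4 addr flush cvl_0_1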
00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 540490 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 540490' 00:34:36.817 killing process with pid 540490 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 540490 00:34:36.817 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 540490 00:34:37.075 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:37.075 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:37.075 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:37.075 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:34:37.075 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:34:37.075 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:37.075 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:34:37.075 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:37.075 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:37.075 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:37.075 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:37.075 00:16:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.618 00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:39.618 00:34:39.618 real 0m47.469s 00:34:39.618 user 2m59.266s 00:34:39.618 sys 0m20.300s 00:34:39.618 00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:39.618 00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:34:39.618 ************************************ 00:34:39.618 END TEST nvmf_ns_hotplug_stress 00:34:39.618 ************************************ 00:34:39.618 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:34:39.618 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:39.618 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:39.618 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:39.618 ************************************ 00:34:39.618 START TEST nvmf_delete_subsystem 00:34:39.618 ************************************ 00:34:39.618 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:34:39.618 * Looking for test storage... 00:34:39.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:34:39.618 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:39.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.619 --rc genhtml_branch_coverage=1 00:34:39.619 --rc genhtml_function_coverage=1 00:34:39.619 --rc genhtml_legend=1 00:34:39.619 --rc geninfo_all_blocks=1 00:34:39.619 --rc geninfo_unexecuted_blocks=1 00:34:39.619 00:34:39.619 ' 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:39.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.619 --rc genhtml_branch_coverage=1 00:34:39.619 --rc genhtml_function_coverage=1 00:34:39.619 --rc genhtml_legend=1 00:34:39.619 --rc geninfo_all_blocks=1 00:34:39.619 --rc geninfo_unexecuted_blocks=1 00:34:39.619 00:34:39.619 ' 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:39.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.619 --rc genhtml_branch_coverage=1 00:34:39.619 --rc genhtml_function_coverage=1 00:34:39.619 --rc genhtml_legend=1 00:34:39.619 --rc geninfo_all_blocks=1 00:34:39.619 --rc geninfo_unexecuted_blocks=1 00:34:39.619 00:34:39.619 ' 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:39.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:39.619 --rc genhtml_branch_coverage=1 00:34:39.619 --rc genhtml_function_coverage=1 00:34:39.619 --rc 
genhtml_legend=1 00:34:39.619 --rc geninfo_all_blocks=1 00:34:39.619 --rc geninfo_unexecuted_blocks=1 00:34:39.619 00:34:39.619 ' 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:39.619 00:16:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:34:39.619 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:34:39.620 00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:46.187 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:46.187 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:34:46.187 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:46.187 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:46.187 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:46.187 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:46.187 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:46.187 00:16:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:34:46.187 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:46.187 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:34:46.187 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:34:46.187 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:34:46.187 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:34:46.187 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:34:46.187 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:46.188 00:16:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:46.188 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:46.188 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:46.188 00:16:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:46.188 Found net devices under 0000:86:00.0: cvl_0_0 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:46.188 Found net devices under 0000:86:00.1: cvl_0_1 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:46.188 00:16:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:46.188 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:46.188 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:46.188 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:46.188 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:46.188 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:46.188 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:46.188 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:46.188 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:46.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:46.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:34:46.188 00:34:46.188 --- 10.0.0.2 ping statistics --- 00:34:46.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.188 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:46.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:46.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:34:46.189 00:34:46.189 --- 10.0.0.1 ping statistics --- 00:34:46.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:46.189 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=550454 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 550454 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 550454 ']' 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:46.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
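
The trace above covers two setup steps: nvmf_tcp_init splits the pair of e810 ports between the root namespace and a private namespace and checks connectivity with ping, and nvmfappstart then launches nvmf_tgt inside that namespace and waits for its RPC socket. A condensed standalone sketch of both steps, assuming the interface names and 10.0.0.0/24 addresses observed in this run and an SPDK build tree as the working directory; the readiness poll at the end is an illustrative substitute for the harness's own waitforlisten helper:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                        # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port 4420 on the initiator-side interface
  ping -c 1 10.0.0.2                                     # root namespace -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                 # namespace -> root namespace

  # start the target in interrupt mode on cores 0-1 and wait until the RPC socket answers
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  tgt_pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$tgt_pid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.2
  done
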
00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:46.189 [2024-12-10 00:16:20.309704] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:46.189 [2024-12-10 00:16:20.310619] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:34:46.189 [2024-12-10 00:16:20.310652] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:46.189 [2024-12-10 00:16:20.390354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:46.189 [2024-12-10 00:16:20.430851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:46.189 [2024-12-10 00:16:20.430888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:46.189 [2024-12-10 00:16:20.430898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:46.189 [2024-12-10 00:16:20.430904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:46.189 [2024-12-10 00:16:20.430909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:46.189 [2024-12-10 00:16:20.432037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:46.189 [2024-12-10 00:16:20.432039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.189 [2024-12-10 00:16:20.500745] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:46.189 [2024-12-10 00:16:20.501244] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:46.189 [2024-12-10 00:16:20.501479] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
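
The startup notices above also explain the two reactor_run lines: -m 0x3 is a hexadecimal core mask, one reactor is started per set bit (cores 0 and 1 here), and --interrupt-mode switches those reactors and the nvmf poll groups from busy polling to event-driven operation. A trivial bash sketch for reading such a mask:

  mask=0x3                                  # same core mask passed to nvmf_tgt above
  for ((core = 0; core < 64; core++)); do
      (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
  done
  # prints cores 0 and 1, matching the two 'Reactor started on core' notices
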
00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:46.189 [2024-12-10 00:16:20.568827] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:46.189 [2024-12-10 00:16:20.597191] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:46.189 NULL1 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.189 00:16:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:46.189 Delay0 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=550482 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:34:46.189 00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:46.189 [2024-12-10 00:16:20.708924] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
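
Pulled together from the rpc_cmd calls above, the target side of this test is a null bdev wrapped in a delay bdev (so submitted I/O stays outstanding long enough for the delete to race it), exposed through subsystem cnode1 on 10.0.0.2:4420, after which spdk_nvme_perf drives random I/O at queue depth 128 and nvmf_delete_subsystem is issued two seconds in. The same sequence written as direct rpc.py calls against the target's RPC socket (the harness routes these through its own rpc_cmd wrapper):

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # stand-in for the harness's rpc_cmd
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512                      # backing null bdev, 512-byte blocks
  rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # large artificial latencies keep I/O queued
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # drive I/O from the initiator side, then delete the subsystem underneath it
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1     # in-flight commands then fail with the (sct=0, sc=8) completions traced below
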
00:34:48.091 00:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:48.091 00:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.091 00:16:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 starting I/O failed: -6 00:34:48.091 Write completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 starting I/O failed: -6 00:34:48.091 Write completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 starting I/O failed: -6 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Write completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 starting I/O failed: -6 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Write completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 starting I/O failed: -6 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Write completed with error (sct=0, sc=8) 00:34:48.091 Write completed with error (sct=0, sc=8) 00:34:48.091 Write completed with error (sct=0, sc=8) 00:34:48.091 starting I/O failed: -6 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 starting I/O failed: -6 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 starting I/O failed: -6 00:34:48.091 Write completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 starting I/O failed: -6 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 starting I/O failed: -6 00:34:48.091 Write completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 [2024-12-10 00:16:22.804554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae4a0 is same with the state(6) to be set 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Write completed with error (sct=0, sc=8) 00:34:48.091 Write completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 
Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Write completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Write completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Write completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.091 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 [2024-12-10 00:16:22.805086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae860 is same with the state(6) to be set 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 starting I/O failed: -6 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 starting I/O failed: -6 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 starting I/O failed: -6 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error 
(sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 starting I/O failed: -6 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 starting I/O failed: -6 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 starting I/O failed: -6 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 starting I/O failed: -6 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 starting I/O failed: -6 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 starting I/O failed: -6 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 starting I/O failed: -6 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 [2024-12-10 00:16:22.805482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd08c00d4d0 is same with the state(6) to be set 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Write 
completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Write completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:48.092 Read completed with error (sct=0, sc=8) 00:34:49.028 [2024-12-10 00:16:23.762943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf9b0 is same with the state(6) to be set 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 [2024-12-10 00:16:23.807334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd08c00d800 is same with the state(6) to be set 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Write completed with 
error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 [2024-12-10 00:16:23.807483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd08c00d020 is same with the state(6) to be set 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Write completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.028 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Write completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Write completed with error (sct=0, sc=8) 00:34:49.029 [2024-12-10 00:16:23.808088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae2c0 is same with the state(6) to be set 00:34:49.029 Write completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, 
sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Write completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Write completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Write completed with error (sct=0, sc=8) 00:34:49.029 Write completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Write completed with error (sct=0, sc=8) 00:34:49.029 Write completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Read completed with error (sct=0, sc=8) 00:34:49.029 Write completed with error (sct=0, sc=8) 00:34:49.029 [2024-12-10 00:16:23.808772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae680 is same with the state(6) to be set 00:34:49.029 Initializing NVMe Controllers 00:34:49.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:49.029 Controller IO queue size 128, less than required. 00:34:49.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:49.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:49.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:49.029 Initialization complete. Launching workers. 
00:34:49.029 ======================================================== 00:34:49.029 Latency(us) 00:34:49.029 Device Information : IOPS MiB/s Average min max 00:34:49.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.32 0.08 947839.21 542.20 2002185.31 00:34:49.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.87 0.08 1024251.30 240.97 2002188.33 00:34:49.029 ======================================================== 00:34:49.029 Total : 322.19 0.16 985279.96 240.97 2002188.33 00:34:49.029 00:34:49.029 [2024-12-10 00:16:23.809213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeaf9b0 (9): Bad file descriptor 00:34:49.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf: errors occurred 00:34:49.029 00:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.029 00:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:34:49.029 00:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 550482 00:34:49.029 00:16:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:34:49.595 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:34:49.595 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 550482 00:34:49.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (550482) - No such process 00:34:49.595 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 550482 00:34:49.595 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:34:49.595 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 550482 00:34:49.595 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:34:49.595 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.595 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:34:49.595 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:49.595 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 550482 00:34:49.595 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:34:49.595 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:49.595 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:49.595 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:49.596 [2024-12-10 00:16:24.333026] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=551163 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 551163 00:34:49.596 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:49.596 [2024-12-10 00:16:24.416419] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
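
For the second pass the subsystem is re-created with the same listener and Delay0 namespace, and the test then simply waits for a 3-second perf run (pid 551163) to exit on its own, probing it with kill -0 every half second and giving up after roughly 20 iterations, as the repeated (( delay++ > 20 )) / sleep 0.5 lines below show. The same waiting pattern as a small self-contained sketch; wait_for_pid is an illustrative name, not a helper from the harness:

  # Poll a background process until it exits or the iteration budget runs out.
  wait_for_pid() {
      local pid=$1 budget=${2:-20} tries=0
      while kill -0 "$pid" 2>/dev/null; do          # succeeds only while the process still exists
          (( tries++ > budget )) && return 1        # still alive after the budget: give up
          sleep 0.5
      done
      return 0                                      # kill -0 failed, i.e. the process has exited
  }
  # usage: wait_for_pid "$perf_pid" || echo "perf did not finish in time" >&2
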
00:34:50.162 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:50.162 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 551163 00:34:50.162 00:16:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:50.727 00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:50.727 00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 551163 00:34:50.727 00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:50.994 00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:50.994 00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 551163 00:34:50.994 00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:51.562 00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:51.562 00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 551163 00:34:51.562 00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:52.127 00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:52.127 00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 551163 00:34:52.127 00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:52.693 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:52.693 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 551163 00:34:52.693 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:34:52.952 Initializing NVMe Controllers 00:34:52.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:52.952 Controller IO queue size 128, less than required. 00:34:52.952 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:52.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:52.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:52.952 Initialization complete. Launching workers. 
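The repeated kill -0 / sleep 0.5 iterations above are delete_subsystem.sh waiting for that perf process to exit on its own. A minimal sketch of the idiom, with names mirroring the trace rather than quoting the script:

  # Poll the perf process with signal 0 until it is gone; give up after roughly
  # 20 * 0.5 s. kill -0 delivers no signal, it only checks that the PID exists.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      if (( delay++ > 20 )); then
          echo "spdk_nvme_perf ($perf_pid) did not exit in time" >&2
          break
      fi
      sleep 0.5
  done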
00:34:52.952 ======================================================== 00:34:52.952 Latency(us) 00:34:52.952 Device Information : IOPS MiB/s Average min max 00:34:52.952 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002007.80 1000136.55 1005944.07 00:34:52.952 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004139.32 1000195.36 1042407.41 00:34:52.952 ======================================================== 00:34:52.952 Total : 256.00 0.12 1003073.56 1000136.55 1042407.41 00:34:52.952 00:34:52.952 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:34:52.952 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 551163 00:34:52.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (551163) - No such process 00:34:52.952 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 551163 00:34:52.952 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:34:52.952 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:34:52.952 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:52.952 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:34:52.952 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:52.952 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:34:52.952 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:52.952 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:53.211 rmmod nvme_tcp 00:34:53.211 rmmod nvme_fabrics 00:34:53.211 rmmod nvme_keyring 00:34:53.211 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:53.211 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:34:53.211 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:34:53.211 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 550454 ']' 00:34:53.211 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 550454 00:34:53.211 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 550454 ']' 00:34:53.211 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 550454 00:34:53.211 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:34:53.211 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:53.211 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 550454 00:34:53.211 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:53.211 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:53.211 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 550454' 00:34:53.211 killing process with pid 550454 00:34:53.212 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 550454 00:34:53.212 00:16:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 550454 00:34:53.470 00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:53.470 00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:53.470 00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:53.470 00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:34:53.470 00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:34:53.470 00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:53.470 00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:34:53.470 00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:53.470 00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:53.470 00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.470 00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:53.470 00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.373 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:55.373 00:34:55.373 real 0m16.169s 00:34:55.373 user 0m26.093s 00:34:55.373 sys 0m6.123s 00:34:55.373 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:55.373 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:34:55.373 ************************************ 00:34:55.373 END TEST nvmf_delete_subsystem 00:34:55.373 ************************************ 00:34:55.373 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:55.373 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:55.373 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:34:55.373 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:55.373 ************************************ 00:34:55.373 START TEST nvmf_host_management 00:34:55.373 ************************************ 00:34:55.373 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:34:55.633 * Looking for test storage... 00:34:55.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:34:55.633 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:55.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.634 --rc genhtml_branch_coverage=1 00:34:55.634 --rc genhtml_function_coverage=1 00:34:55.634 --rc genhtml_legend=1 00:34:55.634 --rc geninfo_all_blocks=1 00:34:55.634 --rc geninfo_unexecuted_blocks=1 00:34:55.634 00:34:55.634 ' 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:55.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.634 --rc genhtml_branch_coverage=1 00:34:55.634 --rc genhtml_function_coverage=1 00:34:55.634 --rc genhtml_legend=1 00:34:55.634 --rc geninfo_all_blocks=1 00:34:55.634 --rc geninfo_unexecuted_blocks=1 00:34:55.634 00:34:55.634 ' 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:55.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.634 --rc genhtml_branch_coverage=1 00:34:55.634 --rc genhtml_function_coverage=1 00:34:55.634 --rc genhtml_legend=1 00:34:55.634 --rc geninfo_all_blocks=1 00:34:55.634 --rc geninfo_unexecuted_blocks=1 00:34:55.634 00:34:55.634 ' 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:55.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.634 --rc genhtml_branch_coverage=1 00:34:55.634 --rc genhtml_function_coverage=1 00:34:55.634 --rc genhtml_legend=1 
00:34:55.634 --rc geninfo_all_blocks=1 00:34:55.634 --rc geninfo_unexecuted_blocks=1 00:34:55.634 00:34:55.634 ' 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:55.634 00:16:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:55.634 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:34:55.635 00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:02.200 00:16:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:02.200 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:02.201 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:02.201 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
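The array bookkeeping above is nvmf/common.sh matching the supported NIC PCI IDs against this host; both hits are Intel E810 functions (0x8086:0x159b), and the lines that follow resolve their kernel net devices from sysfs. A hand-run equivalent, assuming lspci is available (the commands below are an illustration, not part of the trace):

  # List E810 functions by vendor:device ID, with full PCI addresses.
  lspci -D -d 8086:159b
  # For each function the trace discovered, the bound net device sits under sysfs
  # (cvl_0_0 and cvl_0_1 on this host).
  for pci in 0000:86:00.0 0000:86:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"
  done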
00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:02.201 Found net devices under 0000:86:00.0: cvl_0_0 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:02.201 Found net devices under 0000:86:00.1: cvl_0_1 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:02.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:02.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:35:02.201 00:35:02.201 --- 10.0.0.2 ping statistics --- 00:35:02.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.201 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:02.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:02.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:35:02.201 00:35:02.201 --- 10.0.0.1 ping statistics --- 00:35:02.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.201 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=555156 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 555156 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 555156 ']' 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:02.201 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
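The nvmf_tcp_init steps traced above can be reproduced standalone roughly as follows; the interface and namespace names are simply the ones this CI host uses:

  # Target-side port (cvl_0_0) moves into its own namespace with 10.0.0.2; the
  # initiator-side port (cvl_0_1) stays on the host with 10.0.0.1. Port 4420 is
  # opened for NVMe/TCP and both directions are ping-checked.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # host -> namespaced target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host port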
00:35:02.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.202 [2024-12-10 00:16:36.404432] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:02.202 [2024-12-10 00:16:36.405394] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:35:02.202 [2024-12-10 00:16:36.405432] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:02.202 [2024-12-10 00:16:36.484740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:02.202 [2024-12-10 00:16:36.528384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:02.202 [2024-12-10 00:16:36.528421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:02.202 [2024-12-10 00:16:36.528428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:02.202 [2024-12-10 00:16:36.528434] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:02.202 [2024-12-10 00:16:36.528441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:02.202 [2024-12-10 00:16:36.529963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:02.202 [2024-12-10 00:16:36.530075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:02.202 [2024-12-10 00:16:36.530196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.202 [2024-12-10 00:16:36.530197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:02.202 [2024-12-10 00:16:36.599166] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:02.202 [2024-12-10 00:16:36.599761] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:02.202 [2024-12-10 00:16:36.600063] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:02.202 [2024-12-10 00:16:36.600155] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:02.202 [2024-12-10 00:16:36.600258] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
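nvmfappstart, traced above, boils down to launching nvmf_tgt inside the target namespace and blocking until its RPC socket responds. A sketch under the assumption that polling rpc_get_methods is an acceptable stand-in for SPDK's waitforlisten helper (it is not what the helper literally does):

  # Start the target on cores 1-4 (-m 0x1E) in interrupt mode, inside the
  # namespace that holds the 10.0.0.2 port, then wait for the RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done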
00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.202 [2024-12-10 00:16:36.666873] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.202 Malloc0 00:35:02.202 [2024-12-10 00:16:36.755147] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=555217 00:35:02.202 00:16:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 555217 /var/tmp/bdevperf.sock 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 555217 ']' 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:02.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:02.202 { 00:35:02.202 "params": { 00:35:02.202 "name": "Nvme$subsystem", 00:35:02.202 "trtype": "$TEST_TRANSPORT", 00:35:02.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:02.202 "adrfam": "ipv4", 00:35:02.202 "trsvcid": "$NVMF_PORT", 00:35:02.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:02.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:02.202 "hdgst": ${hdgst:-false}, 00:35:02.202 "ddgst": ${ddgst:-false} 00:35:02.202 }, 00:35:02.202 "method": "bdev_nvme_attach_controller" 00:35:02.202 } 00:35:02.202 EOF 00:35:02.202 )") 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
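gen_nvmf_target_json builds the bdev configuration that bdevperf reads from /dev/fd/63. Written out as an ordinary file (the filename below is arbitrary), and with the outer "subsystems"/"bdev" wrapper assumed since the trace only prints the inner entry, an equivalent standalone invocation looks like:

  cat > /tmp/bdevperf_nvme.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF

  # 10 s of 64 KiB verify I/O against the attached namespace, with a private RPC
  # socket so the test can poll bdevperf while it runs (see the iostat loop below).
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
      -q 64 -o 65536 -w verify -t 10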
00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:35:02.202 00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:02.202 "params": { 00:35:02.202 "name": "Nvme0", 00:35:02.202 "trtype": "tcp", 00:35:02.202 "traddr": "10.0.0.2", 00:35:02.202 "adrfam": "ipv4", 00:35:02.202 "trsvcid": "4420", 00:35:02.202 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:02.202 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:02.202 "hdgst": false, 00:35:02.202 "ddgst": false 00:35:02.202 }, 00:35:02.202 "method": "bdev_nvme_attach_controller" 00:35:02.202 }' 00:35:02.202 [2024-12-10 00:16:36.854642] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:35:02.202 [2024-12-10 00:16:36.854711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid555217 ] 00:35:02.202 [2024-12-10 00:16:36.931636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.202 [2024-12-10 00:16:36.973873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:02.461 Running I/O for 10 seconds... 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:35:02.461 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=654 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 654 -ge 100 ']' 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.722 [2024-12-10 00:16:37.562608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562656] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 [2024-12-10 00:16:37.562739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff120 is same with the state(6) to be set 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.722 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:02.722 [2024-12-10 00:16:37.569960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.722 [2024-12-10 00:16:37.569999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.722 [2024-12-10 00:16:37.570016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:02.722 [2024-12-10 00:16:37.570024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.722 [2024-12-10 00:16:37.570033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.722 [2024-12-10 00:16:37.570040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.722 [2024-12-10 00:16:37.570049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.722 [2024-12-10 00:16:37.570056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.722 [2024-12-10 00:16:37.570064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.722 [2024-12-10 00:16:37.570071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.722 [2024-12-10 00:16:37.570079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.722 [2024-12-10 00:16:37.570085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.722 [2024-12-10 00:16:37.570094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.722 [2024-12-10 00:16:37.570102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.722 [2024-12-10 00:16:37.570110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 
[2024-12-10 00:16:37.570184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 
[2024-12-10 00:16:37.570342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 
[2024-12-10 00:16:37.570493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 
[2024-12-10 00:16:37.570644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.723 [2024-12-10 00:16:37.570728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.723 [2024-12-10 00:16:37.570734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 [2024-12-10 00:16:37.570749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 [2024-12-10 00:16:37.570764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 [2024-12-10 00:16:37.570780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 
[2024-12-10 00:16:37.570795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 [2024-12-10 00:16:37.570810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 [2024-12-10 00:16:37.570825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 [2024-12-10 00:16:37.570840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 [2024-12-10 00:16:37.570855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 [2024-12-10 00:16:37.570872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 [2024-12-10 00:16:37.570886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 [2024-12-10 00:16:37.570900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 [2024-12-10 00:16:37.570915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 [2024-12-10 00:16:37.570930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 
[2024-12-10 00:16:37.570945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 [2024-12-10 00:16:37.570960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.570969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:02.724 [2024-12-10 00:16:37.570978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.571075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:02.724 [2024-12-10 00:16:37.571086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.571095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:02.724 [2024-12-10 00:16:37.571101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.571109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:02.724 [2024-12-10 00:16:37.571115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.571123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:02.724 [2024-12-10 00:16:37.571130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:02.724 [2024-12-10 00:16:37.571137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x137f120 is same with the state(6) to be set 00:35:02.724 [2024-12-10 00:16:37.572015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:02.724 task offset: 98304 on job bdev=Nvme0n1 fails 00:35:02.724 00:35:02.724 Latency(us) 00:35:02.724 [2024-12-09T23:16:37.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.724 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:02.724 Job: Nvme0n1 ended in about 0.40 seconds with error 00:35:02.724 Verification LBA range: start 0x0 length 0x400 00:35:02.724 Nvme0n1 : 0.40 1915.80 119.74 159.65 0.00 29992.06 1802.24 27582.11 00:35:02.724 [2024-12-09T23:16:37.660Z] =================================================================================================================== 00:35:02.724 [2024-12-09T23:16:37.660Z] Total : 1915.80 119.74 159.65 0.00 29992.06 1802.24 27582.11 00:35:02.724 [2024-12-10 00:16:37.574393] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:02.724 [2024-12-10 00:16:37.574412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x137f120 (9): Bad file descriptor 00:35:02.724 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.724 00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:35:02.724 [2024-12-10 00:16:37.618213] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:35:03.660 00:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 555217 00:35:03.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh: line 91: kill: (555217) - No such process 00:35:03.660 00:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:35:03.660 00:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:35:03.660 00:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:35:03.660 00:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:35:03.660 00:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:35:03.660 00:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:35:03.660 00:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:03.660 00:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:03.660 { 00:35:03.660 "params": { 00:35:03.660 "name": "Nvme$subsystem", 00:35:03.660 "trtype": "$TEST_TRANSPORT", 00:35:03.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:03.660 "adrfam": "ipv4", 00:35:03.660 "trsvcid": "$NVMF_PORT", 00:35:03.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:03.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:03.660 "hdgst": ${hdgst:-false}, 00:35:03.660 "ddgst": ${ddgst:-false} 00:35:03.660 }, 00:35:03.660 "method": "bdev_nvme_attach_controller" 00:35:03.660 } 00:35:03.660 EOF 00:35:03.660 )") 00:35:03.660 00:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:35:03.919 00:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
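For reference, the failover exercised in the first bdevperf run comes down to two target-side RPCs: removing the host NQN from the subsystem, which aborts the in-flight WRITEs with SQ DELETION as dumped above and makes bdev_nvme reset the controller, then adding the host back so the reset can complete ("Resetting controller successful"). A minimal sketch using the rpc.py command names and NQNs that appear in the trace; the test itself issues these through its rpc_cmd wrapper:

    # Toggle host access on the target to force the initiator through a controller reset.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0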
00:35:03.919 00:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:35:03.919 00:16:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:03.919 "params": { 00:35:03.919 "name": "Nvme0", 00:35:03.919 "trtype": "tcp", 00:35:03.919 "traddr": "10.0.0.2", 00:35:03.919 "adrfam": "ipv4", 00:35:03.919 "trsvcid": "4420", 00:35:03.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:03.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:03.919 "hdgst": false, 00:35:03.919 "ddgst": false 00:35:03.919 }, 00:35:03.919 "method": "bdev_nvme_attach_controller" 00:35:03.919 }' 00:35:03.919 [2024-12-10 00:16:38.636009] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:35:03.919 [2024-12-10 00:16:38.636058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid555648 ] 00:35:03.919 [2024-12-10 00:16:38.709839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.919 [2024-12-10 00:16:38.749036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.177 Running I/O for 1 seconds... 00:35:05.113 1984.00 IOPS, 124.00 MiB/s 00:35:05.113 Latency(us) 00:35:05.113 [2024-12-09T23:16:40.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.113 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:35:05.113 Verification LBA range: start 0x0 length 0x400 00:35:05.113 Nvme0n1 : 1.02 2006.68 125.42 0.00 0.00 31390.98 5784.26 27468.13 00:35:05.113 [2024-12-09T23:16:40.049Z] =================================================================================================================== 00:35:05.113 [2024-12-09T23:16:40.049Z] Total : 2006.68 125.42 0.00 0.00 31390.98 5784.26 27468.13 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:05.372 rmmod nvme_tcp 00:35:05.372 rmmod nvme_fabrics 00:35:05.372 rmmod nvme_keyring 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 555156 ']' 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 555156 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 555156 ']' 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 555156 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 555156 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 555156' 00:35:05.372 killing process with pid 555156 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 555156 00:35:05.372 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 555156 00:35:05.631 [2024-12-10 00:16:40.411874] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:35:05.631 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:05.631 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:05.631 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:05.631 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:35:05.631 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:35:05.631 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:05.631 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:35:05.631 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:05.631 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 
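The cleanup traced here (nvmftestfini) unloads the host-side NVMe transport modules, stops the target application, and restores the firewall and interface state. A condensed sketch of those steps using only commands visible in this part of the log; nvmf_tgt_pid is an illustrative name for the target pid (555156 in this run):

    # Teardown sequence, condensed from the trace; not the literal nvmf/common.sh code.
    modprobe -v -r nvme-tcp            # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    kill "$nvmf_tgt_pid"               # the interrupt-mode nvmf target (reactor_1), not bdevperf
    wait "$nvmf_tgt_pid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip the SPDK_NVMF test rules
    ip -4 addr flush cvl_0_1                                 # flush the test address from the spare port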
00:35:05.631 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.631 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:05.631 00:16:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:35:08.176 00:35:08.176 real 0m12.215s 00:35:08.176 user 0m17.645s 00:35:08.176 sys 0m6.235s 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:35:08.176 ************************************ 00:35:08.176 END TEST nvmf_host_management 00:35:08.176 ************************************ 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:08.176 ************************************ 00:35:08.176 START TEST nvmf_lvol 00:35:08.176 ************************************ 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:35:08.176 * Looking for test storage... 
00:35:08.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:08.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.176 --rc genhtml_branch_coverage=1 00:35:08.176 --rc genhtml_function_coverage=1 00:35:08.176 --rc genhtml_legend=1 00:35:08.176 --rc geninfo_all_blocks=1 00:35:08.176 --rc geninfo_unexecuted_blocks=1 00:35:08.176 00:35:08.176 ' 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:08.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.176 --rc genhtml_branch_coverage=1 00:35:08.176 --rc genhtml_function_coverage=1 00:35:08.176 --rc genhtml_legend=1 00:35:08.176 --rc geninfo_all_blocks=1 00:35:08.176 --rc geninfo_unexecuted_blocks=1 00:35:08.176 00:35:08.176 ' 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:08.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.176 --rc genhtml_branch_coverage=1 00:35:08.176 --rc genhtml_function_coverage=1 00:35:08.176 --rc genhtml_legend=1 00:35:08.176 --rc geninfo_all_blocks=1 00:35:08.176 --rc geninfo_unexecuted_blocks=1 00:35:08.176 00:35:08.176 ' 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:08.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:08.176 --rc genhtml_branch_coverage=1 00:35:08.176 --rc genhtml_function_coverage=1 00:35:08.176 --rc genhtml_legend=1 00:35:08.176 --rc geninfo_all_blocks=1 00:35:08.176 --rc geninfo_unexecuted_blocks=1 00:35:08.176 00:35:08.176 ' 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.176 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:08.177 00:16:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:35:08.177 00:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:13.453 00:16:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:13.453 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:13.453 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:13.453 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:13.454 Found net devices under 0000:86:00.0: cvl_0_0 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:13.454 Found net devices under 0000:86:00.1: cvl_0_1 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:13.454 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:13.712 
00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:13.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:13.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:35:13.712 00:35:13.712 --- 10.0.0.2 ping statistics --- 00:35:13.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.712 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:13.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:13.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:35:13.712 00:35:13.712 --- 10.0.0.1 ping statistics --- 00:35:13.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.712 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:13.712 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:13.971 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:35:13.971 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:13.971 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:13.971 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:13.971 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=559324 00:35:13.971 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:35:13.971 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 559324 00:35:13.971 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 559324 ']' 00:35:13.971 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.971 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:13.971 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:13.971 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:13.971 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:13.971 [2024-12-10 00:16:48.700624] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
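Annotation (reconstruction, not captured output): the nvmftestinit plumbing traced above boils down to the following sequence, copied from the xtrace; the interface names cvl_0_0/cvl_0_1, the namespace name and the 10.0.0.0/24 addresses are specific to this run.

  # clear any stale addresses, then move one E810 port into a private namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address both ends: initiator side stays in the host namespace, target side inside it
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP/4420 for NVMe-oF, then verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1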
00:35:13.971 [2024-12-10 00:16:48.701576] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:35:13.971 [2024-12-10 00:16:48.701610] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:13.971 [2024-12-10 00:16:48.784306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:13.971 [2024-12-10 00:16:48.826897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:13.971 [2024-12-10 00:16:48.826934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:13.971 [2024-12-10 00:16:48.826942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:13.971 [2024-12-10 00:16:48.826948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:13.971 [2024-12-10 00:16:48.826954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:13.971 [2024-12-10 00:16:48.828319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.971 [2024-12-10 00:16:48.828342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.971 [2024-12-10 00:16:48.828343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:13.971 [2024-12-10 00:16:48.896443] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:13.971 [2024-12-10 00:16:48.897194] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:13.971 [2024-12-10 00:16:48.897347] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:13.971 [2024-12-10 00:16:48.897453] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
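Annotation (reconstruction, not captured output): the target started above is nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7, run inside cvl_0_0_ns_spdk; -m 0x7 gives three reactors on cores 0-2 (matching the three "Reactor started" notices), -e 0xFFFF is the tracepoint group mask reported by app_setup_trace, and --interrupt-mode is what produces the spdk_interrupt_mode_enable / spdk_thread_set_interrupt_mode notices. The next stretch of the log then drives the lvol scenario itself over rpc.py; condensed from the xtrace that follows (rpc is shorthand for the script's rpc_py, and lvs/lvol/snapshot/clone hold the UUIDs the script captures from each call), the sequence is:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                    # Malloc0
  $rpc bdev_malloc_create 64 512                                    # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'    # stripe the two malloc bdevs
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                    # cc6d6830-aab1-4def-94e0-f7c6c4761bc4 in this run
  lvol=$($rpc bdev_lvol_create -u $lvs lvol 20)                     # 20 MiB logical volume
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 10 s of 4 KiB random writes from lcores 3-4 while the lvol is snapshotted,
  # resized to 30 MiB, cloned and the clone inflated underneath the workload
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  perf_pid=$!
  sleep 1
  snapshot=$($rpc bdev_lvol_snapshot $lvol MY_SNAPSHOT)
  $rpc bdev_lvol_resize $lvol 30
  clone=$($rpc bdev_lvol_clone $snapshot MY_CLONE)
  $rpc bdev_lvol_inflate $clone
  wait $perf_pid
  # teardown: delete the subsystem, the lvol and the lvstore, then nvmftestfini
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete $lvol
  $rpc bdev_lvol_delete_lvstore -u $lvs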
00:35:14.230 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.230 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:35:14.230 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:14.230 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:14.230 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:14.230 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:14.230 00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:14.230 [2024-12-10 00:16:49.141221] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:14.488 00:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:14.488 00:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:35:14.488 00:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:14.747 00:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:35:14.747 00:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:35:15.005 00:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:35:15.264 00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cc6d6830-aab1-4def-94e0-f7c6c4761bc4 00:35:15.264 00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u cc6d6830-aab1-4def-94e0-f7c6c4761bc4 lvol 20 00:35:15.523 00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4363aeea-b24b-491c-9c5e-c0a12067bcf5 00:35:15.523 00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:15.523 00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4363aeea-b24b-491c-9c5e-c0a12067bcf5 00:35:15.781 00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:16.039 [2024-12-10 00:16:50.809099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:35:16.039 00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:16.297 00:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=559696 00:35:16.297 00:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:35:16.297 00:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:35:17.232 00:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_snapshot 4363aeea-b24b-491c-9c5e-c0a12067bcf5 MY_SNAPSHOT 00:35:17.491 00:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=05c87500-812c-43f9-b3e8-387f7ab90874 00:35:17.491 00:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_resize 4363aeea-b24b-491c-9c5e-c0a12067bcf5 30 00:35:17.750 00:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_clone 05c87500-812c-43f9-b3e8-387f7ab90874 MY_CLONE 00:35:18.011 00:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=db98e025-a556-40f5-9f49-1ba32b204f58 00:35:18.011 00:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_inflate db98e025-a556-40f5-9f49-1ba32b204f58 00:35:18.578 00:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 559696 00:35:26.688 Initializing NVMe Controllers 00:35:26.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:26.688 Controller IO queue size 128, less than required. 00:35:26.688 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:26.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:35:26.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:35:26.688 Initialization complete. Launching workers. 
00:35:26.688 ======================================================== 00:35:26.688 Latency(us) 00:35:26.688 Device Information : IOPS MiB/s Average min max 00:35:26.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12534.50 48.96 10216.12 4909.54 59138.15 00:35:26.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12424.00 48.53 10305.14 5766.53 52017.55 00:35:26.688 ======================================================== 00:35:26.688 Total : 24958.50 97.49 10260.43 4909.54 59138.15 00:35:26.688 00:35:26.688 00:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:26.946 00:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete 4363aeea-b24b-491c-9c5e-c0a12067bcf5 00:35:27.205 00:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cc6d6830-aab1-4def-94e0-f7c6c4761bc4 00:35:27.205 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:35:27.205 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:35:27.205 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:35:27.205 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:27.205 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:35:27.205 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:27.205 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:35:27.205 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:27.205 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:27.205 rmmod nvme_tcp 00:35:27.205 rmmod nvme_fabrics 00:35:27.205 rmmod nvme_keyring 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 559324 ']' 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 559324 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 559324 ']' 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 559324 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 559324 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 559324' 00:35:27.464 killing process with pid 559324 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 559324 00:35:27.464 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 559324 00:35:27.723 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:27.723 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:27.723 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:27.723 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:35:27.723 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:35:27.723 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:27.723 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:35:27.723 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:27.723 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:27.723 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.723 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:27.723 00:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.626 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:29.626 00:35:29.626 real 0m21.898s 00:35:29.626 user 0m55.991s 00:35:29.626 sys 0m9.756s 00:35:29.626 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:29.626 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:35:29.626 ************************************ 00:35:29.626 END TEST nvmf_lvol 00:35:29.626 ************************************ 00:35:29.626 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:35:29.626 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:29.626 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:29.626 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:29.886 ************************************ 00:35:29.886 START TEST nvmf_lvs_grow 00:35:29.886 
************************************ 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:35:29.886 * Looking for test storage... 00:35:29.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:29.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.886 --rc genhtml_branch_coverage=1 00:35:29.886 --rc genhtml_function_coverage=1 00:35:29.886 --rc genhtml_legend=1 00:35:29.886 --rc geninfo_all_blocks=1 00:35:29.886 --rc geninfo_unexecuted_blocks=1 00:35:29.886 00:35:29.886 ' 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:29.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.886 --rc genhtml_branch_coverage=1 00:35:29.886 --rc genhtml_function_coverage=1 00:35:29.886 --rc genhtml_legend=1 00:35:29.886 --rc geninfo_all_blocks=1 00:35:29.886 --rc geninfo_unexecuted_blocks=1 00:35:29.886 00:35:29.886 ' 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:29.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.886 --rc genhtml_branch_coverage=1 00:35:29.886 --rc genhtml_function_coverage=1 00:35:29.886 --rc genhtml_legend=1 00:35:29.886 --rc geninfo_all_blocks=1 00:35:29.886 --rc geninfo_unexecuted_blocks=1 00:35:29.886 00:35:29.886 ' 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:29.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:29.886 --rc genhtml_branch_coverage=1 00:35:29.886 --rc genhtml_function_coverage=1 00:35:29.886 --rc genhtml_legend=1 00:35:29.886 --rc geninfo_all_blocks=1 00:35:29.886 --rc geninfo_unexecuted_blocks=1 00:35:29.886 00:35:29.886 ' 00:35:29.886 00:17:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
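Annotation (reconstruction, not captured output): the very long PATH values above are expected noise rather than corruption: /etc/opt/spdk-pkgdep/paths/export.sh prepends the go/protoc/golangci directories each time it is sourced, and it is sourced again for every test script, so the same three entries accumulate. Purely as an illustration (this is not something the log or the repository does), an idempotent prepend avoids that kind of growth:

  # illustrative only: prepend a directory to PATH, skipping it if already present
  prepend_path() {
    case ":$PATH:" in
      *":$1:"*) ;;               # already on PATH, nothing to do
      *) PATH="$1:$PATH" ;;
    esac
  }
  prepend_path /opt/go/1.21.1/bin
  prepend_path /opt/protoc/21.7/bin
  prepend_path /opt/golangci/1.54.2/bin
  export PATH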
00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:29.886 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:29.887 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:29.887 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:29.887 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.887 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:29.887 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:29.887 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:29.887 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:29.887 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:35:29.887 00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:36.465 00:17:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:35:36.465 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:36.466 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:36.466 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:36.466 Found net devices under 0000:86:00.0: cvl_0_0 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:36.466 Found net devices under 0000:86:00.1: cvl_0_1 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:36.466 00:17:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:36.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:36.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:35:36.466 00:35:36.466 --- 10.0.0.2 ping statistics --- 00:35:36.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.466 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:36.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:36.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:35:36.466 00:35:36.466 --- 10.0.0.1 ping statistics --- 00:35:36.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:36.466 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:36.466 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=565047 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 565047 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 565047 ']' 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:36.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:36.467 [2024-12-10 00:17:10.712584] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
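As a reader-oriented sketch (not part of the captured trace), the nvmf_tcp_init sequence traced above boils down to roughly the following shell steps; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are the ones discovered on this host during this run and will differ on other machines:

    #!/usr/bin/env bash
    # Minimal sketch of the namespace-based NVMe/TCP loopback setup traced above.
    set -e
    TARGET_IF=cvl_0_0       # first e810 port, moved into the target namespace
    INITIATOR_IF=cvl_0_1    # second e810 port, stays in the root namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # NVMF_INITIATOR_IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # NVMF_FIRST_TARGET_IP

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port on the initiator-side interface, then verify both directions.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1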
00:35:36.467 [2024-12-10 00:17:10.713558] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:35:36.467 [2024-12-10 00:17:10.713590] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:36.467 [2024-12-10 00:17:10.791388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.467 [2024-12-10 00:17:10.831452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:36.467 [2024-12-10 00:17:10.831489] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:36.467 [2024-12-10 00:17:10.831496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:36.467 [2024-12-10 00:17:10.831503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:36.467 [2024-12-10 00:17:10.831508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:36.467 [2024-12-10 00:17:10.832030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.467 [2024-12-10 00:17:10.899532] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:36.467 [2024-12-10 00:17:10.899728] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.467 00:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:36.467 [2024-12-10 00:17:11.136685] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:36.467 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:35:36.467 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:36.467 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:36.467 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:36.467 ************************************ 00:35:36.467 START TEST lvs_grow_clean 00:35:36.467 ************************************ 00:35:36.467 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:35:36.467 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:36.467 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:36.467 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:36.467 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:36.467 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:36.467 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:36.467 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:35:36.467 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:35:36.467 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:36.727 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:36.727 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:36.727 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8d20fafe-67e7-439a-8bf2-bc520c9bae4a 00:35:36.727 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d20fafe-67e7-439a-8bf2-bc520c9bae4a 00:35:36.727 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:36.986 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:36.986 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:36.986 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 8d20fafe-67e7-439a-8bf2-bc520c9bae4a lvol 150 00:35:37.247 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a9b26c2f-71a3-475f-8566-8dd398933f57 00:35:37.248 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # 
truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:35:37.248 00:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:37.248 [2024-12-10 00:17:12.172432] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:37.248 [2024-12-10 00:17:12.172563] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:37.248 true 00:35:37.506 00:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:37.506 00:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d20fafe-67e7-439a-8bf2-bc520c9bae4a 00:35:37.506 00:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:37.506 00:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:37.765 00:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a9b26c2f-71a3-475f-8566-8dd398933f57 00:35:38.024 00:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:38.024 [2024-12-10 00:17:12.940922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:38.283 00:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:38.283 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=565398 00:35:38.283 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:38.283 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:38.283 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 565398 /var/tmp/bdevperf.sock 00:35:38.283 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 565398 ']' 00:35:38.283 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:38.283 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:38.283 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:38.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:38.283 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:38.283 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:35:38.283 [2024-12-10 00:17:13.196383] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:35:38.283 [2024-12-10 00:17:13.196430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565398 ] 00:35:38.542 [2024-12-10 00:17:13.270725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:38.542 [2024-12-10 00:17:13.311953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:38.542 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:38.542 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:35:38.542 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:38.800 Nvme0n1 00:35:38.800 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:39.059 [ 00:35:39.059 { 00:35:39.059 "name": "Nvme0n1", 00:35:39.059 "aliases": [ 00:35:39.059 "a9b26c2f-71a3-475f-8566-8dd398933f57" 00:35:39.059 ], 00:35:39.059 "product_name": "NVMe disk", 00:35:39.059 "block_size": 4096, 00:35:39.059 "num_blocks": 38912, 00:35:39.059 "uuid": "a9b26c2f-71a3-475f-8566-8dd398933f57", 00:35:39.059 "numa_id": 1, 00:35:39.059 "assigned_rate_limits": { 00:35:39.059 "rw_ios_per_sec": 0, 00:35:39.059 "rw_mbytes_per_sec": 0, 00:35:39.059 "r_mbytes_per_sec": 0, 00:35:39.059 "w_mbytes_per_sec": 0 00:35:39.059 }, 00:35:39.059 "claimed": false, 00:35:39.059 "zoned": false, 00:35:39.059 "supported_io_types": { 00:35:39.059 "read": true, 00:35:39.059 "write": true, 00:35:39.059 "unmap": true, 00:35:39.059 "flush": true, 00:35:39.059 "reset": true, 00:35:39.059 "nvme_admin": true, 00:35:39.059 "nvme_io": true, 00:35:39.059 "nvme_io_md": false, 00:35:39.059 "write_zeroes": true, 00:35:39.059 "zcopy": false, 00:35:39.059 "get_zone_info": false, 00:35:39.059 "zone_management": false, 00:35:39.059 "zone_append": false, 00:35:39.059 "compare": true, 00:35:39.059 "compare_and_write": true, 00:35:39.059 "abort": true, 00:35:39.059 "seek_hole": false, 00:35:39.059 "seek_data": false, 00:35:39.059 
"copy": true, 00:35:39.059 "nvme_iov_md": false 00:35:39.059 }, 00:35:39.059 "memory_domains": [ 00:35:39.059 { 00:35:39.059 "dma_device_id": "system", 00:35:39.059 "dma_device_type": 1 00:35:39.059 } 00:35:39.059 ], 00:35:39.059 "driver_specific": { 00:35:39.059 "nvme": [ 00:35:39.059 { 00:35:39.059 "trid": { 00:35:39.059 "trtype": "TCP", 00:35:39.059 "adrfam": "IPv4", 00:35:39.059 "traddr": "10.0.0.2", 00:35:39.059 "trsvcid": "4420", 00:35:39.059 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:39.059 }, 00:35:39.059 "ctrlr_data": { 00:35:39.059 "cntlid": 1, 00:35:39.059 "vendor_id": "0x8086", 00:35:39.059 "model_number": "SPDK bdev Controller", 00:35:39.059 "serial_number": "SPDK0", 00:35:39.059 "firmware_revision": "25.01", 00:35:39.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:39.059 "oacs": { 00:35:39.059 "security": 0, 00:35:39.059 "format": 0, 00:35:39.059 "firmware": 0, 00:35:39.059 "ns_manage": 0 00:35:39.059 }, 00:35:39.059 "multi_ctrlr": true, 00:35:39.059 "ana_reporting": false 00:35:39.059 }, 00:35:39.059 "vs": { 00:35:39.059 "nvme_version": "1.3" 00:35:39.059 }, 00:35:39.059 "ns_data": { 00:35:39.059 "id": 1, 00:35:39.059 "can_share": true 00:35:39.059 } 00:35:39.059 } 00:35:39.059 ], 00:35:39.059 "mp_policy": "active_passive" 00:35:39.059 } 00:35:39.059 } 00:35:39.059 ] 00:35:39.059 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:39.059 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=565560 00:35:39.059 00:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:39.059 Running I/O for 10 seconds... 
00:35:39.994 Latency(us) 00:35:39.994 [2024-12-09T23:17:14.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:39.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:39.994 Nvme0n1 : 1.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:35:39.994 [2024-12-09T23:17:14.930Z] =================================================================================================================== 00:35:39.994 [2024-12-09T23:17:14.930Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:35:39.994 00:35:40.929 00:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8d20fafe-67e7-439a-8bf2-bc520c9bae4a 00:35:41.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:41.187 Nvme0n1 : 2.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:35:41.187 [2024-12-09T23:17:16.123Z] =================================================================================================================== 00:35:41.187 [2024-12-09T23:17:16.123Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:35:41.187 00:35:41.187 true 00:35:41.187 00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d20fafe-67e7-439a-8bf2-bc520c9bae4a 00:35:41.187 00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:41.446 00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:41.446 00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:41.446 00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 565560 00:35:42.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:42.014 Nvme0n1 : 3.00 22817.67 89.13 0.00 0.00 0.00 0.00 0.00 00:35:42.014 [2024-12-09T23:17:16.950Z] =================================================================================================================== 00:35:42.014 [2024-12-09T23:17:16.950Z] Total : 22817.67 89.13 0.00 0.00 0.00 0.00 0.00 00:35:42.014 00:35:43.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:43.390 Nvme0n1 : 4.00 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:35:43.390 [2024-12-09T23:17:18.326Z] =================================================================================================================== 00:35:43.390 [2024-12-09T23:17:18.326Z] Total : 22923.50 89.54 0.00 0.00 0.00 0.00 0.00 00:35:43.390 00:35:44.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:44.327 Nvme0n1 : 5.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:35:44.327 [2024-12-09T23:17:19.263Z] =================================================================================================================== 00:35:44.327 [2024-12-09T23:17:19.263Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:35:44.327 00:35:45.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:45.263 Nvme0n1 : 6.00 23029.33 89.96 0.00 0.00 0.00 0.00 0.00 00:35:45.263 [2024-12-09T23:17:20.199Z] 
=================================================================================================================== 00:35:45.263 [2024-12-09T23:17:20.199Z] Total : 23029.33 89.96 0.00 0.00 0.00 0.00 0.00 00:35:45.263 00:35:46.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:46.205 Nvme0n1 : 7.00 23007.57 89.87 0.00 0.00 0.00 0.00 0.00 00:35:46.205 [2024-12-09T23:17:21.141Z] =================================================================================================================== 00:35:46.205 [2024-12-09T23:17:21.141Z] Total : 23007.57 89.87 0.00 0.00 0.00 0.00 0.00 00:35:46.205 00:35:47.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:47.141 Nvme0n1 : 8.00 23036.75 89.99 0.00 0.00 0.00 0.00 0.00 00:35:47.141 [2024-12-09T23:17:22.077Z] =================================================================================================================== 00:35:47.141 [2024-12-09T23:17:22.077Z] Total : 23036.75 89.99 0.00 0.00 0.00 0.00 0.00 00:35:47.141 00:35:48.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:48.082 Nvme0n1 : 9.00 23073.56 90.13 0.00 0.00 0.00 0.00 0.00 00:35:48.082 [2024-12-09T23:17:23.018Z] =================================================================================================================== 00:35:48.082 [2024-12-09T23:17:23.018Z] Total : 23073.56 90.13 0.00 0.00 0.00 0.00 0.00 00:35:48.082 00:35:49.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:49.016 Nvme0n1 : 10.00 23090.30 90.20 0.00 0.00 0.00 0.00 0.00 00:35:49.016 [2024-12-09T23:17:23.952Z] =================================================================================================================== 00:35:49.016 [2024-12-09T23:17:23.952Z] Total : 23090.30 90.20 0.00 0.00 0.00 0.00 0.00 00:35:49.016 00:35:49.016 00:35:49.016 Latency(us) 00:35:49.016 [2024-12-09T23:17:23.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:49.016 Nvme0n1 : 10.00 23093.77 90.21 0.00 0.00 5539.43 3291.05 26328.38 00:35:49.016 [2024-12-09T23:17:23.952Z] =================================================================================================================== 00:35:49.016 [2024-12-09T23:17:23.952Z] Total : 23093.77 90.21 0.00 0.00 5539.43 3291.05 26328.38 00:35:49.016 { 00:35:49.016 "results": [ 00:35:49.016 { 00:35:49.016 "job": "Nvme0n1", 00:35:49.016 "core_mask": "0x2", 00:35:49.016 "workload": "randwrite", 00:35:49.016 "status": "finished", 00:35:49.016 "queue_depth": 128, 00:35:49.016 "io_size": 4096, 00:35:49.016 "runtime": 10.004039, 00:35:49.016 "iops": 23093.772425317413, 00:35:49.016 "mibps": 90.21004853639614, 00:35:49.016 "io_failed": 0, 00:35:49.016 "io_timeout": 0, 00:35:49.016 "avg_latency_us": 5539.433343667601, 00:35:49.016 "min_latency_us": 3291.046956521739, 00:35:49.016 "max_latency_us": 26328.375652173912 00:35:49.016 } 00:35:49.016 ], 00:35:49.016 "core_count": 1 00:35:49.016 } 00:35:49.016 00:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 565398 00:35:49.016 00:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 565398 ']' 00:35:49.016 00:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 565398 
00:35:49.275 00:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:35:49.275 00:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.275 00:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 565398 00:35:49.275 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:49.275 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:49.275 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 565398' 00:35:49.275 killing process with pid 565398 00:35:49.275 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 565398 00:35:49.275 Received shutdown signal, test time was about 10.000000 seconds 00:35:49.275 00:35:49.275 Latency(us) 00:35:49.275 [2024-12-09T23:17:24.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.275 [2024-12-09T23:17:24.211Z] =================================================================================================================== 00:35:49.275 [2024-12-09T23:17:24.211Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:49.275 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 565398 00:35:49.275 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:49.534 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:49.793 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d20fafe-67e7-439a-8bf2-bc520c9bae4a 00:35:49.793 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:35:50.051 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:35:50.051 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:35:50.051 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:50.051 [2024-12-10 00:17:24.948537] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:35:50.313 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d20fafe-67e7-439a-8bf2-bc520c9bae4a 
00:35:50.313 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:35:50.313 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d20fafe-67e7-439a-8bf2-bc520c9bae4a 00:35:50.313 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:35:50.313 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.313 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:35:50.313 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.313 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:35:50.313 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.313 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:35:50.313 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:35:50.314 00:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d20fafe-67e7-439a-8bf2-bc520c9bae4a 00:35:50.314 request: 00:35:50.314 { 00:35:50.314 "uuid": "8d20fafe-67e7-439a-8bf2-bc520c9bae4a", 00:35:50.314 "method": "bdev_lvol_get_lvstores", 00:35:50.314 "req_id": 1 00:35:50.314 } 00:35:50.314 Got JSON-RPC error response 00:35:50.314 response: 00:35:50.314 { 00:35:50.314 "code": -19, 00:35:50.314 "message": "No such device" 00:35:50.314 } 00:35:50.314 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:35:50.314 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:50.314 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:50.314 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:50.314 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:50.573 aio_bdev 00:35:50.573 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
a9b26c2f-71a3-475f-8566-8dd398933f57 00:35:50.573 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a9b26c2f-71a3-475f-8566-8dd398933f57 00:35:50.573 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:50.573 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:35:50.573 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:50.573 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:50.573 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:50.831 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b a9b26c2f-71a3-475f-8566-8dd398933f57 -t 2000 00:35:50.831 [ 00:35:50.831 { 00:35:50.831 "name": "a9b26c2f-71a3-475f-8566-8dd398933f57", 00:35:50.831 "aliases": [ 00:35:50.831 "lvs/lvol" 00:35:50.831 ], 00:35:50.831 "product_name": "Logical Volume", 00:35:50.831 "block_size": 4096, 00:35:50.831 "num_blocks": 38912, 00:35:50.831 "uuid": "a9b26c2f-71a3-475f-8566-8dd398933f57", 00:35:50.831 "assigned_rate_limits": { 00:35:50.831 "rw_ios_per_sec": 0, 00:35:50.831 "rw_mbytes_per_sec": 0, 00:35:50.831 "r_mbytes_per_sec": 0, 00:35:50.831 "w_mbytes_per_sec": 0 00:35:50.831 }, 00:35:50.831 "claimed": false, 00:35:50.831 "zoned": false, 00:35:50.831 "supported_io_types": { 00:35:50.831 "read": true, 00:35:50.831 "write": true, 00:35:50.831 "unmap": true, 00:35:50.831 "flush": false, 00:35:50.831 "reset": true, 00:35:50.831 "nvme_admin": false, 00:35:50.831 "nvme_io": false, 00:35:50.831 "nvme_io_md": false, 00:35:50.831 "write_zeroes": true, 00:35:50.831 "zcopy": false, 00:35:50.831 "get_zone_info": false, 00:35:50.831 "zone_management": false, 00:35:50.831 "zone_append": false, 00:35:50.831 "compare": false, 00:35:50.831 "compare_and_write": false, 00:35:50.831 "abort": false, 00:35:50.831 "seek_hole": true, 00:35:50.831 "seek_data": true, 00:35:50.831 "copy": false, 00:35:50.831 "nvme_iov_md": false 00:35:50.831 }, 00:35:50.831 "driver_specific": { 00:35:50.831 "lvol": { 00:35:50.831 "lvol_store_uuid": "8d20fafe-67e7-439a-8bf2-bc520c9bae4a", 00:35:50.831 "base_bdev": "aio_bdev", 00:35:50.831 "thin_provision": false, 00:35:50.831 "num_allocated_clusters": 38, 00:35:50.831 "snapshot": false, 00:35:50.831 "clone": false, 00:35:50.831 "esnap_clone": false 00:35:50.831 } 00:35:50.831 } 00:35:50.831 } 00:35:50.831 ] 00:35:50.831 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:35:51.090 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d20fafe-67e7-439a-8bf2-bc520c9bae4a 00:35:51.090 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:35:51.090 00:17:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:35:51.090 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d20fafe-67e7-439a-8bf2-bc520c9bae4a 00:35:51.090 00:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:35:51.356 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:35:51.356 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete a9b26c2f-71a3-475f-8566-8dd398933f57 00:35:51.615 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8d20fafe-67e7-439a-8bf2-bc520c9bae4a 00:35:51.874 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:35:51.874 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:35:51.874 00:35:51.874 real 0m15.594s 00:35:51.874 user 0m15.105s 00:35:51.874 sys 0m1.452s 00:35:51.874 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:51.874 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:35:51.874 ************************************ 00:35:51.874 END TEST lvs_grow_clean 00:35:51.874 ************************************ 00:35:52.133 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:35:52.133 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:52.133 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:52.133 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:35:52.133 ************************************ 00:35:52.133 START TEST lvs_grow_dirty 00:35:52.133 ************************************ 00:35:52.133 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:35:52.133 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:35:52.133 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:35:52.133 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:35:52.133 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:35:52.133 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:35:52.133 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:35:52.133 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:35:52.133 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:35:52.133 00:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:35:52.392 00:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:35:52.392 00:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:35:52.392 00:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d0df4e03-3fd4-463c-9f91-dc2184f85f86 00:35:52.392 00:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 00:35:52.392 00:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:35:52.651 00:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:35:52.651 00:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:35:52.651 00:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 lvol 150 00:35:52.910 00:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6f896c37-35d0-4a57-8635-7aa9df7ef60c 00:35:52.910 00:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:35:52.910 00:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:35:53.168 [2024-12-10 00:17:27.888439] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:35:53.168 [2024-12-10 
00:17:27.888572] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:35:53.168 true 00:35:53.168 00:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 00:35:53.168 00:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:35:53.427 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:35:53.427 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:53.427 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6f896c37-35d0-4a57-8635-7aa9df7ef60c 00:35:53.686 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:53.943 [2024-12-10 00:17:28.660898] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:53.943 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:53.943 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=567912 00:35:53.943 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:35:53.943 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:53.944 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 567912 /var/tmp/bdevperf.sock 00:35:53.944 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 567912 ']' 00:35:53.944 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:53.944 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:53.944 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:53.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
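Both the clean and dirty variants then verify the grow itself in the same way; as a sketch under this run's values (the d0df4e03... UUID is the dirty variant's lvstore printed above, and the 49/99 cluster counts are the ones reported for the 200 MiB and 400 MiB backing-file sizes with 4 MiB clusters):

    #!/usr/bin/env bash
    # Sketch of the grow/verify step that lvs_grow performs, using values from this run.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
    AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev
    lvs=d0df4e03-3fd4-463c-9f91-dc2184f85f86   # lvstore UUID printed above

    truncate -s 400M "$AIO"          # double the AIO backing file
    $RPC bdev_aio_rescan aio_bdev    # bdev is resized: 51200 -> 102400 blocks

    # The lvstore still reports its original capacity until it is explicitly grown...
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # -> 49

    # ...and reflects the new capacity afterwards.
    $RPC bdev_lvol_grow_lvstore -u "$lvs"
    $RPC bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # -> 99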
00:35:53.944 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:53.944 00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:35:54.202 [2024-12-10 00:17:28.902341] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:35:54.202 [2024-12-10 00:17:28.902391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567912 ] 00:35:54.202 [2024-12-10 00:17:28.976976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.202 [2024-12-10 00:17:29.018685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:54.202 00:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:54.202 00:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:35:54.202 00:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:35:54.461 Nvme0n1 00:35:54.461 00:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:35:54.720 [ 00:35:54.720 { 00:35:54.720 "name": "Nvme0n1", 00:35:54.720 "aliases": [ 00:35:54.720 "6f896c37-35d0-4a57-8635-7aa9df7ef60c" 00:35:54.720 ], 00:35:54.720 "product_name": "NVMe disk", 00:35:54.720 "block_size": 4096, 00:35:54.720 "num_blocks": 38912, 00:35:54.720 "uuid": "6f896c37-35d0-4a57-8635-7aa9df7ef60c", 00:35:54.720 "numa_id": 1, 00:35:54.720 "assigned_rate_limits": { 00:35:54.720 "rw_ios_per_sec": 0, 00:35:54.720 "rw_mbytes_per_sec": 0, 00:35:54.720 "r_mbytes_per_sec": 0, 00:35:54.720 "w_mbytes_per_sec": 0 00:35:54.720 }, 00:35:54.720 "claimed": false, 00:35:54.720 "zoned": false, 00:35:54.720 "supported_io_types": { 00:35:54.720 "read": true, 00:35:54.720 "write": true, 00:35:54.720 "unmap": true, 00:35:54.720 "flush": true, 00:35:54.720 "reset": true, 00:35:54.720 "nvme_admin": true, 00:35:54.720 "nvme_io": true, 00:35:54.720 "nvme_io_md": false, 00:35:54.720 "write_zeroes": true, 00:35:54.720 "zcopy": false, 00:35:54.720 "get_zone_info": false, 00:35:54.720 "zone_management": false, 00:35:54.720 "zone_append": false, 00:35:54.720 "compare": true, 00:35:54.720 "compare_and_write": true, 00:35:54.720 "abort": true, 00:35:54.720 "seek_hole": false, 00:35:54.720 "seek_data": false, 00:35:54.720 "copy": true, 00:35:54.720 "nvme_iov_md": false 00:35:54.720 }, 00:35:54.720 "memory_domains": [ 00:35:54.720 { 00:35:54.720 "dma_device_id": "system", 00:35:54.720 "dma_device_type": 1 00:35:54.720 } 00:35:54.720 ], 00:35:54.720 "driver_specific": { 00:35:54.720 "nvme": [ 00:35:54.720 { 00:35:54.720 "trid": { 00:35:54.720 "trtype": "TCP", 00:35:54.720 "adrfam": "IPv4", 00:35:54.720 "traddr": "10.0.0.2", 00:35:54.720 "trsvcid": "4420", 00:35:54.720 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:35:54.720 }, 00:35:54.720 
"ctrlr_data": { 00:35:54.720 "cntlid": 1, 00:35:54.720 "vendor_id": "0x8086", 00:35:54.720 "model_number": "SPDK bdev Controller", 00:35:54.720 "serial_number": "SPDK0", 00:35:54.720 "firmware_revision": "25.01", 00:35:54.720 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:54.720 "oacs": { 00:35:54.720 "security": 0, 00:35:54.720 "format": 0, 00:35:54.720 "firmware": 0, 00:35:54.720 "ns_manage": 0 00:35:54.720 }, 00:35:54.720 "multi_ctrlr": true, 00:35:54.720 "ana_reporting": false 00:35:54.720 }, 00:35:54.720 "vs": { 00:35:54.720 "nvme_version": "1.3" 00:35:54.720 }, 00:35:54.720 "ns_data": { 00:35:54.720 "id": 1, 00:35:54.720 "can_share": true 00:35:54.720 } 00:35:54.720 } 00:35:54.720 ], 00:35:54.720 "mp_policy": "active_passive" 00:35:54.720 } 00:35:54.720 } 00:35:54.720 ] 00:35:54.720 00:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=568137 00:35:54.720 00:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:54.720 00:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:35:54.720 Running I/O for 10 seconds... 00:35:56.106 Latency(us) 00:35:56.106 [2024-12-09T23:17:31.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:56.106 Nvme0n1 : 1.00 22369.00 87.38 0.00 0.00 0.00 0.00 0.00 00:35:56.106 [2024-12-09T23:17:31.042Z] =================================================================================================================== 00:35:56.106 [2024-12-09T23:17:31.042Z] Total : 22369.00 87.38 0.00 0.00 0.00 0.00 0.00 00:35:56.106 00:35:56.672 00:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 00:35:56.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:56.931 Nvme0n1 : 2.00 22868.50 89.33 0.00 0.00 0.00 0.00 0.00 00:35:56.931 [2024-12-09T23:17:31.867Z] =================================================================================================================== 00:35:56.931 [2024-12-09T23:17:31.867Z] Total : 22868.50 89.33 0.00 0.00 0.00 0.00 0.00 00:35:56.931 00:35:56.931 true 00:35:56.931 00:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:35:56.931 00:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 00:35:57.190 00:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:35:57.190 00:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:35:57.190 00:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 568137 00:35:57.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:35:57.777 Nvme0n1 : 3.00 22992.67 89.82 0.00 0.00 0.00 0.00 0.00 00:35:57.777 [2024-12-09T23:17:32.713Z] =================================================================================================================== 00:35:57.777 [2024-12-09T23:17:32.713Z] Total : 22992.67 89.82 0.00 0.00 0.00 0.00 0.00 00:35:57.777 00:35:58.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:58.713 Nvme0n1 : 4.00 23118.25 90.31 0.00 0.00 0.00 0.00 0.00 00:35:58.713 [2024-12-09T23:17:33.649Z] =================================================================================================================== 00:35:58.713 [2024-12-09T23:17:33.649Z] Total : 23118.25 90.31 0.00 0.00 0.00 0.00 0.00 00:35:58.713 00:36:00.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:00.091 Nvme0n1 : 5.00 23193.60 90.60 0.00 0.00 0.00 0.00 0.00 00:36:00.091 [2024-12-09T23:17:35.027Z] =================================================================================================================== 00:36:00.091 [2024-12-09T23:17:35.027Z] Total : 23193.60 90.60 0.00 0.00 0.00 0.00 0.00 00:36:00.091 00:36:01.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:01.025 Nvme0n1 : 6.00 23243.83 90.80 0.00 0.00 0.00 0.00 0.00 00:36:01.025 [2024-12-09T23:17:35.961Z] =================================================================================================================== 00:36:01.025 [2024-12-09T23:17:35.961Z] Total : 23243.83 90.80 0.00 0.00 0.00 0.00 0.00 00:36:01.025 00:36:01.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:01.961 Nvme0n1 : 7.00 23279.71 90.94 0.00 0.00 0.00 0.00 0.00 00:36:01.961 [2024-12-09T23:17:36.897Z] =================================================================================================================== 00:36:01.961 [2024-12-09T23:17:36.897Z] Total : 23279.71 90.94 0.00 0.00 0.00 0.00 0.00 00:36:01.961 00:36:02.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:02.898 Nvme0n1 : 8.00 23322.50 91.10 0.00 0.00 0.00 0.00 0.00 00:36:02.898 [2024-12-09T23:17:37.834Z] =================================================================================================================== 00:36:02.898 [2024-12-09T23:17:37.834Z] Total : 23322.50 91.10 0.00 0.00 0.00 0.00 0.00 00:36:02.898 00:36:03.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:03.834 Nvme0n1 : 9.00 23341.67 91.18 0.00 0.00 0.00 0.00 0.00 00:36:03.834 [2024-12-09T23:17:38.770Z] =================================================================================================================== 00:36:03.834 [2024-12-09T23:17:38.770Z] Total : 23341.67 91.18 0.00 0.00 0.00 0.00 0.00 00:36:03.834 00:36:04.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:04.771 Nvme0n1 : 10.00 23357.00 91.24 0.00 0.00 0.00 0.00 0.00 00:36:04.771 [2024-12-09T23:17:39.707Z] =================================================================================================================== 00:36:04.771 [2024-12-09T23:17:39.707Z] Total : 23357.00 91.24 0.00 0.00 0.00 0.00 0.00 00:36:04.771 00:36:04.771 00:36:04.771 Latency(us) 00:36:04.771 [2024-12-09T23:17:39.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:04.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:04.771 Nvme0n1 : 10.00 23359.18 91.25 0.00 0.00 5476.49 3319.54 26442.35 
00:36:04.771 [2024-12-09T23:17:39.707Z] =================================================================================================================== 00:36:04.771 [2024-12-09T23:17:39.707Z] Total : 23359.18 91.25 0.00 0.00 5476.49 3319.54 26442.35 00:36:04.771 { 00:36:04.771 "results": [ 00:36:04.771 { 00:36:04.771 "job": "Nvme0n1", 00:36:04.771 "core_mask": "0x2", 00:36:04.771 "workload": "randwrite", 00:36:04.771 "status": "finished", 00:36:04.771 "queue_depth": 128, 00:36:04.771 "io_size": 4096, 00:36:04.771 "runtime": 10.004547, 00:36:04.771 "iops": 23359.178581498993, 00:36:04.771 "mibps": 91.24679133398044, 00:36:04.771 "io_failed": 0, 00:36:04.771 "io_timeout": 0, 00:36:04.771 "avg_latency_us": 5476.485754584047, 00:36:04.771 "min_latency_us": 3319.5408695652172, 00:36:04.771 "max_latency_us": 26442.351304347827 00:36:04.771 } 00:36:04.771 ], 00:36:04.771 "core_count": 1 00:36:04.771 } 00:36:04.771 00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 567912 00:36:04.771 00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 567912 ']' 00:36:04.771 00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 567912 00:36:04.771 00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:36:04.771 00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:04.771 00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 567912 00:36:05.030 00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:05.030 00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:05.030 00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 567912' 00:36:05.030 killing process with pid 567912 00:36:05.030 00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 567912 00:36:05.030 Received shutdown signal, test time was about 10.000000 seconds 00:36:05.030 00:36:05.030 Latency(us) 00:36:05.030 [2024-12-09T23:17:39.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:05.030 [2024-12-09T23:17:39.966Z] =================================================================================================================== 00:36:05.030 [2024-12-09T23:17:39.966Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:05.030 00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 567912 00:36:05.030 00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:05.290 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:36:05.553 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 00:36:05.553 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:36:05.553 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:36:05.553 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:36:05.553 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 565047 00:36:05.553 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 565047 00:36:05.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 565047 Killed "${NVMF_APP[@]}" "$@" 00:36:05.812 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:36:05.812 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:36:05.812 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:05.812 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:05.812 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:05.812 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=569787 00:36:05.812 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 569787 00:36:05.812 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:36:05.812 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 569787 ']' 00:36:05.812 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:05.812 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:05.812 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:05.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
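Reading note on the trace above: this is the pivot of the lvs_grow_dirty case. After confirming 61 free clusters on lvstore d0df4e03-3fd4-463c-9f91-dc2184f85f86, the suite SIGKILLs the running nvmf target (pid 565047) so the lvstore is left dirty on its backing file, then immediately restarts nvmf_tgt in interrupt mode inside the cvl_0_0_ns_spdk namespace and waits for its RPC socket. A minimal sketch of that restart, using only the commands and flags visible in this log (the backgrounding with & and the $! capture are how a shell would do it; the trace only shows the resulting nvmfpid=569787, and the waitforlisten polling of /var/tmp/spdk.sock is not expanded here):

    # assumes the paths, pid and namespace reported in the trace above
    kill -9 565047            # SIGKILL the old target so the lvstore metadata stays dirty
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!                # recorded as nvmfpid=569787 in this run
    # the suite then polls the target's /var/tmp/spdk.sock RPC socket
    # (waitforlisten) before issuing any further RPCs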
00:36:05.812 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:05.812 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:05.812 [2024-12-10 00:17:40.575110] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:05.812 [2024-12-10 00:17:40.576019] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:36:05.812 [2024-12-10 00:17:40.576057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:05.812 [2024-12-10 00:17:40.655005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.812 [2024-12-10 00:17:40.694607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:05.812 [2024-12-10 00:17:40.694643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:05.812 [2024-12-10 00:17:40.694650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:05.812 [2024-12-10 00:17:40.694656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:05.812 [2024-12-10 00:17:40.694661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:05.812 [2024-12-10 00:17:40.695193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:06.071 [2024-12-10 00:17:40.763996] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:06.071 [2024-12-10 00:17:40.764219] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
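What follows next in the trace is the recovery half of the test: the backing AIO file is re-registered, the blobstore notices it was not shut down cleanly ("Performing recovery on blobstore"), replays blobs 0x0 and 0x1, and the original lvol reappears under its old UUID; the suite then rechecks the cluster counts. Condensed to the RPCs actually issued (arguments copied from this run, paths shortened relative to the spdk checkout; rpc.py talks to the /var/tmp/spdk.sock socket of the freshly restarted target):

    ./scripts/rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096    # re-attach the dirty backing file
    ./scripts/rpc.py bdev_wait_for_examine                                        # lvol examine triggers blobstore recovery
    ./scripts/rpc.py bdev_get_bdevs -b 6f896c37-35d0-4a57-8635-7aa9df7ef60c -t 2000   # wait until the lvol is back
    ./scripts/rpc.py bdev_lvol_get_lvstores -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 \
        | jq -r '.[0].free_clusters'        # expected to read 61 again
    ./scripts/rpc.py bdev_lvol_get_lvstores -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 \
        | jq -r '.[0].total_data_clusters'  # expected to read 99, i.e. the grow survived the crash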
00:36:06.071 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:06.071 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:36:06.071 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:06.071 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:06.071 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:06.071 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:06.071 00:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:06.330 [2024-12-10 00:17:41.008528] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:36:06.330 [2024-12-10 00:17:41.008718] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:36:06.330 [2024-12-10 00:17:41.008805] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:36:06.330 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:36:06.330 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6f896c37-35d0-4a57-8635-7aa9df7ef60c 00:36:06.330 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6f896c37-35d0-4a57-8635-7aa9df7ef60c 00:36:06.330 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:06.330 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:36:06.330 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:06.330 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:06.330 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:06.330 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b 6f896c37-35d0-4a57-8635-7aa9df7ef60c -t 2000 00:36:06.590 [ 00:36:06.590 { 00:36:06.590 "name": "6f896c37-35d0-4a57-8635-7aa9df7ef60c", 00:36:06.590 "aliases": [ 00:36:06.590 "lvs/lvol" 00:36:06.590 ], 00:36:06.590 "product_name": "Logical Volume", 00:36:06.590 "block_size": 4096, 00:36:06.590 "num_blocks": 38912, 00:36:06.590 "uuid": "6f896c37-35d0-4a57-8635-7aa9df7ef60c", 00:36:06.590 "assigned_rate_limits": { 00:36:06.590 "rw_ios_per_sec": 0, 00:36:06.590 "rw_mbytes_per_sec": 0, 
00:36:06.590 "r_mbytes_per_sec": 0, 00:36:06.590 "w_mbytes_per_sec": 0 00:36:06.590 }, 00:36:06.590 "claimed": false, 00:36:06.590 "zoned": false, 00:36:06.590 "supported_io_types": { 00:36:06.590 "read": true, 00:36:06.590 "write": true, 00:36:06.590 "unmap": true, 00:36:06.590 "flush": false, 00:36:06.590 "reset": true, 00:36:06.590 "nvme_admin": false, 00:36:06.590 "nvme_io": false, 00:36:06.590 "nvme_io_md": false, 00:36:06.590 "write_zeroes": true, 00:36:06.590 "zcopy": false, 00:36:06.590 "get_zone_info": false, 00:36:06.590 "zone_management": false, 00:36:06.590 "zone_append": false, 00:36:06.590 "compare": false, 00:36:06.590 "compare_and_write": false, 00:36:06.590 "abort": false, 00:36:06.590 "seek_hole": true, 00:36:06.590 "seek_data": true, 00:36:06.590 "copy": false, 00:36:06.590 "nvme_iov_md": false 00:36:06.590 }, 00:36:06.590 "driver_specific": { 00:36:06.590 "lvol": { 00:36:06.590 "lvol_store_uuid": "d0df4e03-3fd4-463c-9f91-dc2184f85f86", 00:36:06.590 "base_bdev": "aio_bdev", 00:36:06.590 "thin_provision": false, 00:36:06.590 "num_allocated_clusters": 38, 00:36:06.590 "snapshot": false, 00:36:06.590 "clone": false, 00:36:06.590 "esnap_clone": false 00:36:06.590 } 00:36:06.590 } 00:36:06.590 } 00:36:06.590 ] 00:36:06.590 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:36:06.590 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:36:06.590 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 00:36:06.849 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:36:06.849 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 00:36:06.849 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:36:07.108 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:36:07.108 00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:07.108 [2024-12-10 00:17:42.023651] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:36:07.367 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 00:36:07.367 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:36:07.367 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
d0df4e03-3fd4-463c-9f91-dc2184f85f86 00:36:07.367 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:36:07.367 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:07.368 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:36:07.368 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:07.368 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:36:07.368 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:07.368 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:36:07.368 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:36:07.368 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 00:36:07.368 request: 00:36:07.368 { 00:36:07.368 "uuid": "d0df4e03-3fd4-463c-9f91-dc2184f85f86", 00:36:07.368 "method": "bdev_lvol_get_lvstores", 00:36:07.368 "req_id": 1 00:36:07.368 } 00:36:07.368 Got JSON-RPC error response 00:36:07.368 response: 00:36:07.368 { 00:36:07.368 "code": -19, 00:36:07.368 "message": "No such device" 00:36:07.368 } 00:36:07.368 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:36:07.368 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:07.368 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:07.368 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:07.368 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:36:07.627 aio_bdev 00:36:07.627 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6f896c37-35d0-4a57-8635-7aa9df7ef60c 00:36:07.627 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6f896c37-35d0-4a57-8635-7aa9df7ef60c 00:36:07.627 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:36:07.627 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:36:07.627 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:07.627 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:07.627 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:07.886 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b 6f896c37-35d0-4a57-8635-7aa9df7ef60c -t 2000 00:36:08.145 [ 00:36:08.145 { 00:36:08.145 "name": "6f896c37-35d0-4a57-8635-7aa9df7ef60c", 00:36:08.145 "aliases": [ 00:36:08.145 "lvs/lvol" 00:36:08.145 ], 00:36:08.145 "product_name": "Logical Volume", 00:36:08.145 "block_size": 4096, 00:36:08.145 "num_blocks": 38912, 00:36:08.145 "uuid": "6f896c37-35d0-4a57-8635-7aa9df7ef60c", 00:36:08.145 "assigned_rate_limits": { 00:36:08.145 "rw_ios_per_sec": 0, 00:36:08.145 "rw_mbytes_per_sec": 0, 00:36:08.145 "r_mbytes_per_sec": 0, 00:36:08.145 "w_mbytes_per_sec": 0 00:36:08.145 }, 00:36:08.145 "claimed": false, 00:36:08.145 "zoned": false, 00:36:08.145 "supported_io_types": { 00:36:08.145 "read": true, 00:36:08.145 "write": true, 00:36:08.145 "unmap": true, 00:36:08.145 "flush": false, 00:36:08.145 "reset": true, 00:36:08.145 "nvme_admin": false, 00:36:08.145 "nvme_io": false, 00:36:08.145 "nvme_io_md": false, 00:36:08.145 "write_zeroes": true, 00:36:08.145 "zcopy": false, 00:36:08.145 "get_zone_info": false, 00:36:08.145 "zone_management": false, 00:36:08.145 "zone_append": false, 00:36:08.145 "compare": false, 00:36:08.145 "compare_and_write": false, 00:36:08.145 "abort": false, 00:36:08.145 "seek_hole": true, 00:36:08.145 "seek_data": true, 00:36:08.145 "copy": false, 00:36:08.145 "nvme_iov_md": false 00:36:08.145 }, 00:36:08.145 "driver_specific": { 00:36:08.145 "lvol": { 00:36:08.145 "lvol_store_uuid": "d0df4e03-3fd4-463c-9f91-dc2184f85f86", 00:36:08.145 "base_bdev": "aio_bdev", 00:36:08.145 "thin_provision": false, 00:36:08.145 "num_allocated_clusters": 38, 00:36:08.145 "snapshot": false, 00:36:08.145 "clone": false, 00:36:08.145 "esnap_clone": false 00:36:08.145 } 00:36:08.145 } 00:36:08.145 } 00:36:08.145 ] 00:36:08.145 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:36:08.145 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 00:36:08.145 00:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:36:08.145 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:36:08.145 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:36:08.145 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 00:36:08.404 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:36:08.404 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete 6f896c37-35d0-4a57-8635-7aa9df7ef60c 00:36:08.662 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d0df4e03-3fd4-463c-9f91-dc2184f85f86 00:36:08.921 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:36:08.921 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:36:09.181 00:36:09.181 real 0m17.005s 00:36:09.181 user 0m34.537s 00:36:09.181 sys 0m3.669s 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:36:09.181 ************************************ 00:36:09.181 END TEST lvs_grow_dirty 00:36:09.181 ************************************ 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:36:09.181 nvmf_trace.0 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@121 -- # sync 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:09.181 00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:09.181 rmmod nvme_tcp 00:36:09.181 rmmod nvme_fabrics 00:36:09.181 rmmod nvme_keyring 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 569787 ']' 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 569787 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 569787 ']' 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 569787 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 569787 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 569787' 00:36:09.181 killing process with pid 569787 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 569787 00:36:09.181 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 569787 00:36:09.440 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:09.440 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:09.440 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:09.440 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:36:09.440 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:36:09.440 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:09.440 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:36:09.440 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:09.440 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:09.440 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.440 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:09.440 00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:11.977 00:36:11.977 real 0m41.759s 00:36:11.977 user 0m52.179s 00:36:11.977 sys 0m9.963s 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:36:11.977 ************************************ 00:36:11.977 END TEST nvmf_lvs_grow 00:36:11.977 ************************************ 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:11.977 ************************************ 00:36:11.977 START TEST nvmf_bdev_io_wait 00:36:11.977 ************************************ 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:36:11.977 * Looking for test storage... 
00:36:11.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:11.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.977 --rc genhtml_branch_coverage=1 00:36:11.977 --rc genhtml_function_coverage=1 00:36:11.977 --rc genhtml_legend=1 00:36:11.977 --rc geninfo_all_blocks=1 00:36:11.977 --rc geninfo_unexecuted_blocks=1 00:36:11.977 00:36:11.977 ' 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:11.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.977 --rc genhtml_branch_coverage=1 00:36:11.977 --rc genhtml_function_coverage=1 00:36:11.977 --rc genhtml_legend=1 00:36:11.977 --rc geninfo_all_blocks=1 00:36:11.977 --rc geninfo_unexecuted_blocks=1 00:36:11.977 00:36:11.977 ' 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:11.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.977 --rc genhtml_branch_coverage=1 00:36:11.977 --rc genhtml_function_coverage=1 00:36:11.977 --rc genhtml_legend=1 00:36:11.977 --rc geninfo_all_blocks=1 00:36:11.977 --rc geninfo_unexecuted_blocks=1 00:36:11.977 00:36:11.977 ' 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:11.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.977 --rc genhtml_branch_coverage=1 00:36:11.977 --rc genhtml_function_coverage=1 00:36:11.977 --rc genhtml_legend=1 00:36:11.977 --rc geninfo_all_blocks=1 00:36:11.977 --rc 
geninfo_unexecuted_blocks=1 00:36:11.977 00:36:11.977 ' 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:11.977 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:36:11.978 00:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:17.259 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:17.260 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:17.519 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:17.519 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:17.519 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:17.519 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:17.519 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:17.520 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:17.520 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:17.520 Found net devices under 0000:86:00.0: cvl_0_0 00:36:17.520 
00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:17.520 Found net devices under 0000:86:00.1: cvl_0_1 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:17.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:17.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:36:17.520 00:36:17.520 --- 10.0.0.2 ping statistics --- 00:36:17.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:17.520 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:17.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:17.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:36:17.520 00:36:17.520 --- 10.0.0.1 ping statistics --- 00:36:17.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:17.520 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:17.520 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:17.780 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:36:17.780 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:17.780 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:17.780 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:17.780 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=573967 00:36:17.780 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 573967 00:36:17.780 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:36:17.780 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 573967 ']' 00:36:17.780 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:17.780 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:17.780 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:17.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
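Editor's note - the nvmf_tcp_init steps above move one E810 port into a target-side network namespace, keep the other as the initiator interface on the host, open TCP/4420 in the firewall, and ping in both directions before the target is launched. Condensed into a standalone sketch (interface, namespace, and address values taken from this run; the iptables comment text is arbitrary apart from the SPDK_NVMF prefix the teardown greps on):

    TGT_IF=cvl_0_0  INI_IF=cvl_0_1  NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator side (host)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target side (namespace)
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:4420'
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # sanity-check both directions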
00:36:17.780 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:17.780 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:17.780 [2024-12-10 00:17:52.547340] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:17.780 [2024-12-10 00:17:52.548347] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:36:17.780 [2024-12-10 00:17:52.548387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:17.780 [2024-12-10 00:17:52.627782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:17.780 [2024-12-10 00:17:52.670668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:17.780 [2024-12-10 00:17:52.670704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:17.780 [2024-12-10 00:17:52.670711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:17.781 [2024-12-10 00:17:52.670717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:17.781 [2024-12-10 00:17:52.670722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:17.781 [2024-12-10 00:17:52.672127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:17.781 [2024-12-10 00:17:52.672280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.781 [2024-12-10 00:17:52.672240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:17.781 [2024-12-10 00:17:52.672281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:17.781 [2024-12-10 00:17:52.672710] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
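Editor's note - the target started above runs inside the namespace in interrupt mode with --wait-for-rpc, so only the RPC server comes up until the test continues initialization. A minimal way to reproduce that launch-and-wait step (paths copied from this run; the polling loop is a simplification of the harness's waitforlisten helper):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app answers a trivial RPC.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"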
00:36:17.781 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:17.781 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:36:17.781 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:17.781 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:17.781 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:18.046 [2024-12-10 00:17:52.822806] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:18.046 [2024-12-10 00:17:52.822907] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:18.046 [2024-12-10 00:17:52.823369] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:18.046 [2024-12-10 00:17:52.823609] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
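Editor's note - because the target waited for RPC, the test first shrinks the bdev I/O pool with bdev_set_options -p 5 -c 1 and only then issues framework_start_init, presumably so the 128-deep bdevperf queues exhaust the pool and exercise the bdev I/O-wait path this test is named after. Roughly, via rpc.py (reading -p / -c as --bdev-io-pool-size / --bdev-io-cache-size is my assumption):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc bdev_set_options -p 5 -c 1    # deliberately tiny bdev_io pool (assumed flag meanings)
    $rpc framework_start_init          # subsystems come up; poll groups switch to interrupt mode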
00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:18.046 [2024-12-10 00:17:52.833113] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:18.046 Malloc0 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:18.046 [2024-12-10 00:17:52.909388] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=574046 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=574048 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:18.046 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:18.046 { 00:36:18.046 "params": { 00:36:18.046 "name": "Nvme$subsystem", 00:36:18.046 "trtype": "$TEST_TRANSPORT", 00:36:18.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:18.046 "adrfam": "ipv4", 00:36:18.046 "trsvcid": "$NVMF_PORT", 00:36:18.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:18.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:18.046 "hdgst": ${hdgst:-false}, 00:36:18.046 "ddgst": ${ddgst:-false} 00:36:18.046 }, 00:36:18.046 "method": "bdev_nvme_attach_controller" 00:36:18.047 } 00:36:18.047 EOF 00:36:18.047 )") 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=574050 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=574053 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:18.047 { 00:36:18.047 "params": { 00:36:18.047 "name": "Nvme$subsystem", 00:36:18.047 "trtype": "$TEST_TRANSPORT", 00:36:18.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:18.047 "adrfam": "ipv4", 00:36:18.047 "trsvcid": "$NVMF_PORT", 00:36:18.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:18.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:18.047 "hdgst": ${hdgst:-false}, 00:36:18.047 "ddgst": ${ddgst:-false} 00:36:18.047 }, 00:36:18.047 "method": 
"bdev_nvme_attach_controller" 00:36:18.047 } 00:36:18.047 EOF 00:36:18.047 )") 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:18.047 { 00:36:18.047 "params": { 00:36:18.047 "name": "Nvme$subsystem", 00:36:18.047 "trtype": "$TEST_TRANSPORT", 00:36:18.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:18.047 "adrfam": "ipv4", 00:36:18.047 "trsvcid": "$NVMF_PORT", 00:36:18.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:18.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:18.047 "hdgst": ${hdgst:-false}, 00:36:18.047 "ddgst": ${ddgst:-false} 00:36:18.047 }, 00:36:18.047 "method": "bdev_nvme_attach_controller" 00:36:18.047 } 00:36:18.047 EOF 00:36:18.047 )") 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:18.047 { 00:36:18.047 "params": { 00:36:18.047 "name": "Nvme$subsystem", 00:36:18.047 "trtype": "$TEST_TRANSPORT", 00:36:18.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:18.047 "adrfam": "ipv4", 00:36:18.047 "trsvcid": "$NVMF_PORT", 00:36:18.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:18.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:18.047 "hdgst": ${hdgst:-false}, 00:36:18.047 "ddgst": ${ddgst:-false} 00:36:18.047 }, 00:36:18.047 "method": "bdev_nvme_attach_controller" 00:36:18.047 } 00:36:18.047 EOF 00:36:18.047 )") 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 574046 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:18.047 "params": { 00:36:18.047 "name": "Nvme1", 00:36:18.047 "trtype": "tcp", 00:36:18.047 "traddr": "10.0.0.2", 00:36:18.047 "adrfam": "ipv4", 00:36:18.047 "trsvcid": "4420", 00:36:18.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:18.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:18.047 "hdgst": false, 00:36:18.047 "ddgst": false 00:36:18.047 }, 00:36:18.047 "method": "bdev_nvme_attach_controller" 00:36:18.047 }' 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:36:18.047 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:18.047 "params": { 00:36:18.047 "name": "Nvme1", 00:36:18.047 "trtype": "tcp", 00:36:18.047 "traddr": "10.0.0.2", 00:36:18.047 "adrfam": "ipv4", 00:36:18.047 "trsvcid": "4420", 00:36:18.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:18.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:18.048 "hdgst": false, 00:36:18.048 "ddgst": false 00:36:18.048 }, 00:36:18.048 "method": "bdev_nvme_attach_controller" 00:36:18.048 }' 00:36:18.048 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:36:18.048 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:18.048 "params": { 00:36:18.048 "name": "Nvme1", 00:36:18.048 "trtype": "tcp", 00:36:18.048 "traddr": "10.0.0.2", 00:36:18.048 "adrfam": "ipv4", 00:36:18.048 "trsvcid": "4420", 00:36:18.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:18.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:18.048 "hdgst": false, 00:36:18.048 "ddgst": false 00:36:18.048 }, 00:36:18.048 "method": "bdev_nvme_attach_controller" 00:36:18.048 }' 00:36:18.048 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:36:18.048 00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:18.048 "params": { 00:36:18.048 "name": "Nvme1", 00:36:18.048 "trtype": "tcp", 00:36:18.048 "traddr": "10.0.0.2", 00:36:18.048 "adrfam": "ipv4", 00:36:18.048 "trsvcid": "4420", 00:36:18.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:18.048 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:18.048 "hdgst": false, 00:36:18.048 "ddgst": false 00:36:18.048 }, 00:36:18.048 "method": "bdev_nvme_attach_controller" 00:36:18.048 }' 00:36:18.048 [2024-12-10 00:17:52.960516] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:36:18.048 [2024-12-10 00:17:52.960561] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:36:18.048 [2024-12-10 00:17:52.961954] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:36:18.048 [2024-12-10 00:17:52.962006] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:36:18.048 [2024-12-10 00:17:52.963071] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:36:18.048 [2024-12-10 00:17:52.963113] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:36:18.048 [2024-12-10 00:17:52.967204] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:36:18.048 [2024-12-10 00:17:52.967246] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:36:18.309 [2024-12-10 00:17:53.154071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.309 [2024-12-10 00:17:53.196953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:18.568 [2024-12-10 00:17:53.255557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.568 [2024-12-10 00:17:53.297234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.568 [2024-12-10 00:17:53.304859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:18.568 [2024-12-10 00:17:53.339045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:18.568 [2024-12-10 00:17:53.357251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.568 [2024-12-10 00:17:53.397886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:18.833 Running I/O for 1 seconds... 00:36:18.833 Running I/O for 1 seconds... 00:36:18.833 Running I/O for 1 seconds... 00:36:18.833 Running I/O for 1 seconds... 
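Editor's note - the four "Running I/O for 1 seconds..." lines correspond to the four bdevperf processes started above, one workload per core mask, all attached to the same subsystem; the script backgrounds them and waits on their PIDs. In outline ($cfg stands in for the generated JSON; the real script uses a fresh process substitution per instance):

    BPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf
    common="-q 128 -o 4096 -t 1 -s 256"                    # same depth, IO size, runtime, memory for all
    $BPERF -m 0x10 -i 1 --json "$cfg" $common -w write  & WRITE_PID=$!
    $BPERF -m 0x20 -i 2 --json "$cfg" $common -w read   & READ_PID=$!
    $BPERF -m 0x40 -i 3 --json "$cfg" $common -w flush  & FLUSH_PID=$!
    $BPERF -m 0x80 -i 4 --json "$cfg" $common -w unmap  & UNMAP_PID=$!
    wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID        # distinct -i values keep their shm ids apart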
00:36:19.770 12757.00 IOPS, 49.83 MiB/s 00:36:19.770 Latency(us) 00:36:19.770 [2024-12-09T23:17:54.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:19.770 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:36:19.770 Nvme1n1 : 1.01 12801.11 50.00 0.00 0.00 9964.56 3362.28 12252.38 00:36:19.770 [2024-12-09T23:17:54.706Z] =================================================================================================================== 00:36:19.770 [2024-12-09T23:17:54.706Z] Total : 12801.11 50.00 0.00 0.00 9964.56 3362.28 12252.38 00:36:19.770 10222.00 IOPS, 39.93 MiB/s 00:36:19.770 Latency(us) 00:36:19.770 [2024-12-09T23:17:54.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:19.770 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:36:19.770 Nvme1n1 : 1.01 10298.47 40.23 0.00 0.00 12389.13 1731.01 15272.74 00:36:19.770 [2024-12-09T23:17:54.706Z] =================================================================================================================== 00:36:19.770 [2024-12-09T23:17:54.706Z] Total : 10298.47 40.23 0.00 0.00 12389.13 1731.01 15272.74 00:36:19.770 237400.00 IOPS, 927.34 MiB/s 00:36:19.770 Latency(us) 00:36:19.770 [2024-12-09T23:17:54.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:19.770 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:36:19.770 Nvme1n1 : 1.00 237035.85 925.92 0.00 0.00 537.70 229.73 1538.67 00:36:19.770 [2024-12-09T23:17:54.706Z] =================================================================================================================== 00:36:19.770 [2024-12-09T23:17:54.706Z] Total : 237035.85 925.92 0.00 0.00 537.70 229.73 1538.67 00:36:19.770 11588.00 IOPS, 45.27 MiB/s 00:36:19.770 Latency(us) 00:36:19.770 [2024-12-09T23:17:54.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:19.770 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:36:19.770 Nvme1n1 : 1.00 11680.63 45.63 0.00 0.00 10934.04 2008.82 16526.47 00:36:19.770 [2024-12-09T23:17:54.706Z] =================================================================================================================== 00:36:19.770 [2024-12-09T23:17:54.706Z] Total : 11680.63 45.63 0.00 0.00 10934.04 2008.82 16526.47 00:36:20.029 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 574048 00:36:20.029 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 574050 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 574053 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:20.030 rmmod nvme_tcp 00:36:20.030 rmmod nvme_fabrics 00:36:20.030 rmmod nvme_keyring 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 573967 ']' 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 573967 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 573967 ']' 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 573967 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 573967 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 573967' 00:36:20.030 killing process with pid 573967 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 573967 00:36:20.030 00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 573967 00:36:20.288 00:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:20.288 00:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:20.288 00:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:20.288 00:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:36:20.288 00:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:36:20.288 
00:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:20.288 00:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:36:20.288 00:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:20.288 00:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:20.288 00:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:20.288 00:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:20.288 00:17:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.824 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:22.824 00:36:22.824 real 0m10.758s 00:36:22.824 user 0m15.176s 00:36:22.824 sys 0m6.502s 00:36:22.824 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:22.824 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:36:22.824 ************************************ 00:36:22.824 END TEST nvmf_bdev_io_wait 00:36:22.824 ************************************ 00:36:22.824 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:36:22.824 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:22.824 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:22.824 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:22.824 ************************************ 00:36:22.824 START TEST nvmf_queue_depth 00:36:22.824 ************************************ 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:36:22.825 * Looking for test storage... 
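Editor's note - the nvmf_bdev_io_wait test just ended with nvmftestfini; condensed from the commands visible above, that teardown is roughly the following (the ip netns deletion is my assumption about what _remove_spdk_ns does, since its output is suppressed in the log):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null            # harness killprocess, simplified
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK-tagged firewall rules
    ip netns del cvl_0_0_ns_spdk                            # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1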
00:36:22.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:22.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.825 --rc genhtml_branch_coverage=1 00:36:22.825 --rc genhtml_function_coverage=1 00:36:22.825 --rc genhtml_legend=1 00:36:22.825 --rc geninfo_all_blocks=1 00:36:22.825 --rc geninfo_unexecuted_blocks=1 00:36:22.825 00:36:22.825 ' 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:22.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.825 --rc genhtml_branch_coverage=1 00:36:22.825 --rc genhtml_function_coverage=1 00:36:22.825 --rc genhtml_legend=1 00:36:22.825 --rc geninfo_all_blocks=1 00:36:22.825 --rc geninfo_unexecuted_blocks=1 00:36:22.825 00:36:22.825 ' 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:22.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.825 --rc genhtml_branch_coverage=1 00:36:22.825 --rc genhtml_function_coverage=1 00:36:22.825 --rc genhtml_legend=1 00:36:22.825 --rc geninfo_all_blocks=1 00:36:22.825 --rc geninfo_unexecuted_blocks=1 00:36:22.825 00:36:22.825 ' 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:22.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:22.825 --rc genhtml_branch_coverage=1 00:36:22.825 --rc genhtml_function_coverage=1 00:36:22.825 --rc genhtml_legend=1 00:36:22.825 --rc geninfo_all_blocks=1 00:36:22.825 --rc 
geninfo_unexecuted_blocks=1 00:36:22.825 00:36:22.825 ' 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:22.825 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:36:22.826 00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:28.099 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:28.099 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:36:28.099 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:28.099 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:28.099 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:28.099 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
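Editor's note - the build_nvmf_app_args lines above show how the --interrupt-mode flag from this suite's invocation propagates into every target launch: common.sh accumulates arguments in the NVMF_APP array and later prepends the namespace prefix. A rough reconstruction from the pieces visible in this log:

    NVMF_APP=("$SPDK/build/bin/nvmf_tgt")
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)            # instance shm id + full tracepoint group mask
    NVMF_APP+=(--interrupt-mode)                           # added because the suite ran with --interrupt-mode
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") # run the target inside the namespace
    # nvmfappstart then appends per-test flags, e.g.: "${NVMF_APP[@]}" -m 0xF --wait-for-rpc &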
00:36:28.099 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:28.099 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:28.100 00:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:28.100 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:28.100 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:28.100 00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:36:28.100 Found net devices under 0000:86:00.0: cvl_0_0 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:28.100 Found net devices under 0000:86:00.1: cvl_0_1 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:28.100 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:28.359 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:28.359 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:28.359 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:28.359 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:28.359 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:28.359 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:28.359 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:28.359 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:28.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:28.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:36:28.359 00:36:28.359 --- 10.0.0.2 ping statistics --- 00:36:28.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:28.359 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:36:28.359 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:28.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:28.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:36:28.359 00:36:28.359 --- 10.0.0.1 ping statistics --- 00:36:28.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:28.359 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:36:28.359 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:28.359 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:36:28.359 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:28.359 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:28.359 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:28.360 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:28.360 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:28.360 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:28.360 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:28.618 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:36:28.618 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:28.618 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:28.618 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:28.618 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=577952 00:36:28.618 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 577952 00:36:28.618 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:36:28.618 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 577952 ']' 00:36:28.618 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:28.618 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:28.618 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:28.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:28.618 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:28.618 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:28.618 [2024-12-10 00:18:03.353679] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:28.618 [2024-12-10 00:18:03.354678] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:36:28.618 [2024-12-10 00:18:03.354717] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:28.618 [2024-12-10 00:18:03.436720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.618 [2024-12-10 00:18:03.476984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:28.618 [2024-12-10 00:18:03.477016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:28.618 [2024-12-10 00:18:03.477024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:28.618 [2024-12-10 00:18:03.477031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:28.618 [2024-12-10 00:18:03.477037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:28.618 [2024-12-10 00:18:03.477542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:28.618 [2024-12-10 00:18:03.546232] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:28.618 [2024-12-10 00:18:03.546433] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
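Annotation (not part of the captured output): the trace above covers two steps of nvmftestinit/nvmfappstart. The two ice ports found earlier (cvl_0_0 under 0000:86:00.0 and cvl_0_1 under 0000:86:00.1) are wired into a loopback topology by moving the target-side port into its own network namespace, and nvmf_tgt is then launched inside that namespace in interrupt mode on core 1 (-m 0x2) with all tracepoint groups enabled (-e 0xFFFF). A minimal sketch of the same sequence, using only commands and values that appear in the trace (paths shortened; the wait loop is a simplification of the harness's waitforlisten helper):

# build the loopback topology: target port in a namespace, initiator port in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port

# start the target inside the namespace: core mask 0x2, all tracepoints, interrupt mode
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

# wait for the RPC socket before configuring anything (simplified stand-in for waitforlisten)
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done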
00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:28.877 [2024-12-10 00:18:03.614238] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:28.877 Malloc0 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:28.877 [2024-12-10 00:18:03.690374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.877 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=577971 00:36:28.878 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:36:28.878 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:28.878 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 577971 /var/tmp/bdevperf.sock 00:36:28.878 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 577971 ']' 00:36:28.878 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:28.878 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:28.878 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:28.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:28.878 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:28.878 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:28.878 [2024-12-10 00:18:03.742174] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:36:28.878 [2024-12-10 00:18:03.742238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid577971 ] 00:36:29.136 [2024-12-10 00:18:03.820543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:29.136 [2024-12-10 00:18:03.863003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:29.136 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:29.136 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:36:29.136 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:36:29.136 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.136 00:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:29.136 NVMe0n1 00:36:29.136 00:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.136 00:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:36:29.395 Running I/O for 10 seconds... 00:36:31.267 11466.00 IOPS, 44.79 MiB/s [2024-12-09T23:18:07.144Z] 11925.50 IOPS, 46.58 MiB/s [2024-12-09T23:18:08.521Z] 12134.00 IOPS, 47.40 MiB/s [2024-12-09T23:18:09.458Z] 12115.75 IOPS, 47.33 MiB/s [2024-12-09T23:18:10.395Z] 12144.40 IOPS, 47.44 MiB/s [2024-12-09T23:18:11.331Z] 12169.50 IOPS, 47.54 MiB/s [2024-12-09T23:18:12.268Z] 12155.86 IOPS, 47.48 MiB/s [2024-12-09T23:18:13.204Z] 12179.88 IOPS, 47.58 MiB/s [2024-12-09T23:18:14.588Z] 12177.33 IOPS, 47.57 MiB/s [2024-12-09T23:18:14.588Z] 12216.00 IOPS, 47.72 MiB/s 00:36:39.652 Latency(us) 00:36:39.652 [2024-12-09T23:18:14.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.652 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:36:39.652 Verification LBA range: start 0x0 length 0x4000 00:36:39.652 NVMe0n1 : 10.05 12247.68 47.84 0.00 0.00 83302.42 10827.69 53568.56 00:36:39.652 [2024-12-09T23:18:14.588Z] =================================================================================================================== 00:36:39.652 [2024-12-09T23:18:14.588Z] Total : 12247.68 47.84 0.00 0.00 83302.42 10827.69 53568.56 00:36:39.652 { 00:36:39.652 "results": [ 00:36:39.652 { 00:36:39.652 "job": "NVMe0n1", 00:36:39.652 "core_mask": "0x1", 00:36:39.652 "workload": "verify", 00:36:39.652 "status": "finished", 00:36:39.652 "verify_range": { 00:36:39.652 "start": 0, 00:36:39.652 "length": 16384 00:36:39.652 }, 00:36:39.652 "queue_depth": 1024, 00:36:39.652 "io_size": 4096, 00:36:39.652 "runtime": 10.052596, 00:36:39.652 "iops": 12247.682091272742, 00:36:39.652 "mibps": 47.84250816903415, 00:36:39.652 "io_failed": 0, 00:36:39.652 "io_timeout": 0, 00:36:39.652 "avg_latency_us": 83302.41891579969, 00:36:39.652 "min_latency_us": 10827.686956521738, 00:36:39.652 "max_latency_us": 53568.556521739134 00:36:39.652 } 
00:36:39.652 ], 00:36:39.652 "core_count": 1 00:36:39.652 } 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 577971 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 577971 ']' 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 577971 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 577971 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 577971' 00:36:39.652 killing process with pid 577971 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 577971 00:36:39.652 Received shutdown signal, test time was about 10.000000 seconds 00:36:39.652 00:36:39.652 Latency(us) 00:36:39.652 [2024-12-09T23:18:14.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.652 [2024-12-09T23:18:14.588Z] =================================================================================================================== 00:36:39.652 [2024-12-09T23:18:14.588Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 577971 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:39.652 rmmod nvme_tcp 00:36:39.652 rmmod nvme_fabrics 00:36:39.652 rmmod nvme_keyring 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:36:39.652 
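Annotation (not part of the captured output): the rpc_cmd calls traced above build the target configuration that bdevperf then drives, namely a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, one subsystem carrying that bdev as a namespace, and a listener on 10.0.0.2:4420; bdevperf attaches to it over NVMe/TCP and runs a 10-second verify workload at queue depth 1024 with 4 KiB I/Os. Condensed into plain commands (paths shortened, everything else verbatim from the trace):

RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# drive it: 1024 outstanding 4 KiB verify I/Os for 10 seconds, then trigger the run over RPC
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The reported numbers are self-consistent: at roughly 12,248 IOPS with 1024 I/Os outstanding, Little's law gives 1024 / 12248 of a second, about 83.6 ms per I/O, which matches the 83.3 ms average latency in the JSON summary above.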
00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 577952 ']' 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 577952 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 577952 ']' 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 577952 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 577952 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 577952' 00:36:39.652 killing process with pid 577952 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 577952 00:36:39.652 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 577952 00:36:39.912 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:39.912 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:39.912 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:39.912 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:36:39.912 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:36:39.912 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:39.912 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:36:39.912 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:39.912 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:39.912 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.912 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:39.912 00:18:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:42.446 00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:42.446 00:36:42.446 real 0m19.597s 00:36:42.446 user 0m22.586s 00:36:42.446 sys 0m6.272s 00:36:42.446 00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:36:42.446 00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:36:42.446 ************************************ 00:36:42.446 END TEST nvmf_queue_depth 00:36:42.446 ************************************ 00:36:42.446 00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:36:42.446 00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:42.446 00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:42.446 00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:42.446 ************************************ 00:36:42.446 START TEST nvmf_target_multipath 00:36:42.446 ************************************ 00:36:42.446 00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:36:42.446 * Looking for test storage... 00:36:42.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:36:42.446 00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:42.446 00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:36:42.446 00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:36:42.447 
00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:42.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.447 --rc genhtml_branch_coverage=1 00:36:42.447 --rc genhtml_function_coverage=1 00:36:42.447 --rc genhtml_legend=1 00:36:42.447 --rc geninfo_all_blocks=1 00:36:42.447 --rc geninfo_unexecuted_blocks=1 00:36:42.447 00:36:42.447 ' 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:42.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.447 --rc genhtml_branch_coverage=1 00:36:42.447 --rc genhtml_function_coverage=1 00:36:42.447 --rc genhtml_legend=1 00:36:42.447 --rc geninfo_all_blocks=1 00:36:42.447 --rc geninfo_unexecuted_blocks=1 00:36:42.447 00:36:42.447 ' 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:42.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.447 --rc genhtml_branch_coverage=1 00:36:42.447 --rc genhtml_function_coverage=1 00:36:42.447 --rc genhtml_legend=1 00:36:42.447 --rc geninfo_all_blocks=1 00:36:42.447 --rc 
geninfo_unexecuted_blocks=1 00:36:42.447 00:36:42.447 ' 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:42.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:42.447 --rc genhtml_branch_coverage=1 00:36:42.447 --rc genhtml_function_coverage=1 00:36:42.447 --rc genhtml_legend=1 00:36:42.447 --rc geninfo_all_blocks=1 00:36:42.447 --rc geninfo_unexecuted_blocks=1 00:36:42.447 00:36:42.447 ' 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
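Annotation (not part of the captured output): sourcing test/nvmf/common.sh above also prepares the initiator-side identity that the multipath test would hand to nvme-cli, that is a host NQN generated by nvme gen-hostnqn plus a matching host ID, packed into the NVME_HOST argument array. A hedged illustration of how those values would typically be consumed; this particular run never gets that far (see the "only one NIC for nvmf test" skip further down), and deriving the host ID from the NQN suffix is an assumption since only the resulting values are visible in the trace:

NVME_HOSTNQN=$(nvme gen-hostnqn)             # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}              # assumption: host ID reuses the uuid suffix of the NQN
# illustrative connect against the listener this harness uses (10.0.0.2:4420, cnode1)
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"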
00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:42.447 00:18:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:42.447 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:36:42.448 00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set 
+x 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:49.017 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:49.017 00:18:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:49.018 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:49.018 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:49.018 00:18:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:49.018 Found net devices under 0000:86:00.0: cvl_0_0 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:49.018 Found net devices under 0000:86:00.1: cvl_0_1 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:49.018 00:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:49.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:49.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:36:49.018 00:36:49.018 --- 10.0.0.2 ping statistics --- 00:36:49.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.018 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:49.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:49.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:36:49.018 00:36:49.018 --- 10.0.0.1 ping statistics --- 00:36:49.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:49.018 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:36:49.018 only one NIC for nvmf test 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:49.018 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:49.018 rmmod nvme_tcp 00:36:49.018 rmmod nvme_fabrics 00:36:49.018 rmmod nvme_keyring 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:36:49.019 00:18:23 
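The nvmf_tcp_init sequence traced above builds the whole loopback fixture from one physical port pair: the target-side port moves into its own network namespace, each side gets an address, the NVMe/TCP port is opened with a tagged iptables rule, and a ping in each direction proves the path. Condensed into a plain sketch using the interface names and addresses from this run:

pci=0000:86:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # resolve PCI device -> kernel netdev
target_if=${pci_net_devs[0]##*/}                   # cvl_0_0 in this log
initiator_if=cvl_0_1

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"
ip netns add cvl_0_0_ns_spdk
ip link set "$target_if" netns cvl_0_0_ns_spdk     # target side lives in its own namespace
ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec cvl_0_0_ns_spdk ip link set "$target_if" up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow the NVMe/TCP port in, tagged so teardown can find the rule again
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator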
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:49.019 00:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:36:50.395 00:18:25 
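The matching teardown (nvmftestfini in the trace above) works because the setup rule carried the SPDK_NVMF comment tag: cleanup amounts to unloading the NVMe modules, filtering the saved iptables rules, and removing the namespace. Roughly, with the namespace removal approximating what remove_spdk_ns does for this run:

for mod in nvme-tcp nvme-fabrics; do
  modprobe -v -r "$mod" || true                        # prints the rmmod lines seen in the log
done
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules (iptr)
ip netns delete cvl_0_0_ns_spdk 2> /dev/null           # assumption: remove_spdk_ns reduces to this here
ip -4 addr flush cvl_0_1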
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:50.395 00:36:50.395 real 0m8.353s 00:36:50.395 user 0m1.839s 00:36:50.395 sys 0m4.461s 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:36:50.395 ************************************ 00:36:50.395 END TEST nvmf_target_multipath 00:36:50.395 ************************************ 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:50.395 ************************************ 00:36:50.395 START TEST nvmf_zcopy 00:36:50.395 ************************************ 00:36:50.395 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:36:50.654 * Looking for test storage... 
00:36:50.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:50.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.654 --rc genhtml_branch_coverage=1 00:36:50.654 --rc genhtml_function_coverage=1 00:36:50.654 --rc genhtml_legend=1 00:36:50.654 --rc geninfo_all_blocks=1 00:36:50.654 --rc geninfo_unexecuted_blocks=1 00:36:50.654 00:36:50.654 ' 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:50.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.654 --rc genhtml_branch_coverage=1 00:36:50.654 --rc genhtml_function_coverage=1 00:36:50.654 --rc genhtml_legend=1 00:36:50.654 --rc geninfo_all_blocks=1 00:36:50.654 --rc geninfo_unexecuted_blocks=1 00:36:50.654 00:36:50.654 ' 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:50.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.654 --rc genhtml_branch_coverage=1 00:36:50.654 --rc genhtml_function_coverage=1 00:36:50.654 --rc genhtml_legend=1 00:36:50.654 --rc geninfo_all_blocks=1 00:36:50.654 --rc geninfo_unexecuted_blocks=1 00:36:50.654 00:36:50.654 ' 00:36:50.654 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:50.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:50.655 --rc genhtml_branch_coverage=1 00:36:50.655 --rc genhtml_function_coverage=1 00:36:50.655 --rc genhtml_legend=1 00:36:50.655 --rc geninfo_all_blocks=1 00:36:50.655 --rc geninfo_unexecuted_blocks=1 00:36:50.655 00:36:50.655 ' 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:50.655 00:18:25 
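The lcov check traced a few entries earlier (lt 1.15 2 via cmp_versions in scripts/common.sh) splits both versions on '.', '-' and ':' and walks the numeric components left to right. A simplified standalone sketch of that comparison, assuming purely numeric components (the real helper also handles the general '<', '>', '=' operators):

lt() {   # usage: lt 1.15 2   -> succeeds if $1 < $2
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v a b max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    a=${ver1[v]:-0} b=${ver2[v]:-0}      # missing components compare as 0
    (( a > b )) && return 1
    (( a < b )) && return 0
  done
  return 1                               # equal is not "less than"
}
lt 1.15 2 && echo "lcov 1.15 is older than 2.x"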
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:36:50.655 00:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:36:57.245 00:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:57.245 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:57.245 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:57.245 Found net devices under 0000:86:00.0: cvl_0_0 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:57.245 Found net devices under 0000:86:00.1: cvl_0_1 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:57.245 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:57.246 00:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:57.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:57.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:36:57.246 00:36:57.246 --- 10.0.0.2 ping statistics --- 00:36:57.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.246 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:57.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:57.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:36:57.246 00:36:57.246 --- 10.0.0.1 ping statistics --- 00:36:57.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.246 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=587027 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 
-e 0xFFFF --interrupt-mode -m 0x2 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 587027 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 587027 ']' 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:57.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:57.246 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:57.246 [2024-12-10 00:18:31.505687] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:57.246 [2024-12-10 00:18:31.506586] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:36:57.246 [2024-12-10 00:18:31.506618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:57.246 [2024-12-10 00:18:31.587389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:57.246 [2024-12-10 00:18:31.627294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:57.246 [2024-12-10 00:18:31.627329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:57.246 [2024-12-10 00:18:31.627336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:57.246 [2024-12-10 00:18:31.627342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:57.246 [2024-12-10 00:18:31.627347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:57.246 [2024-12-10 00:18:31.627872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:57.246 [2024-12-10 00:18:31.695150] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:57.247 [2024-12-10 00:18:31.695375] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
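nvmfappstart, traced above, launches the target inside the test namespace in interrupt mode, pinned to one core (-m 0x2), and blocks until the RPC socket answers before any configuration is attempted. A rough equivalent of that launch-and-wait step; waitforlisten itself lives in autotest_common.sh, so the polling loop and the relative paths below are approximations:

NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
"${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
  kill -0 "$nvmfpid" || exit 1           # bail out if the target already died
  sleep 0.5
done
echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"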
00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:57.247 [2024-12-10 00:18:31.764592] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:57.247 [2024-12-10 00:18:31.788794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:36:57.247 00:18:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:57.247 malloc0 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:57.247 { 00:36:57.247 "params": { 00:36:57.247 "name": "Nvme$subsystem", 00:36:57.247 "trtype": "$TEST_TRANSPORT", 00:36:57.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:57.247 "adrfam": "ipv4", 00:36:57.247 "trsvcid": "$NVMF_PORT", 00:36:57.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:57.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:57.247 "hdgst": ${hdgst:-false}, 00:36:57.247 "ddgst": ${ddgst:-false} 00:36:57.247 }, 00:36:57.247 "method": "bdev_nvme_attach_controller" 00:36:57.247 } 00:36:57.247 EOF 00:36:57.247 )") 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:36:57.247 00:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:57.247 "params": { 00:36:57.247 "name": "Nvme1", 00:36:57.247 "trtype": "tcp", 00:36:57.247 "traddr": "10.0.0.2", 00:36:57.247 "adrfam": "ipv4", 00:36:57.247 "trsvcid": "4420", 00:36:57.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:57.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:57.247 "hdgst": false, 00:36:57.247 "ddgst": false 00:36:57.247 }, 00:36:57.247 "method": "bdev_nvme_attach_controller" 00:36:57.247 }' 00:36:57.247 [2024-12-10 00:18:31.877961] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
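The rpc_cmd calls traced above configure the zcopy target end to end; rpc_cmd in this harness is a thin wrapper over scripts/rpc.py aimed at the target's RPC socket, so the same setup reads as the following sequence (a sketch, with the default socket path assumed):

rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                 # TCP transport with zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                        # 32 MiB malloc bdev, 4 KiB blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1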
00:36:57.247 [2024-12-10 00:18:31.878003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid587185 ] 00:36:57.247 [2024-12-10 00:18:31.952380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:57.247 [2024-12-10 00:18:31.993471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.507 Running I/O for 10 seconds... 00:36:59.392 8293.00 IOPS, 64.79 MiB/s [2024-12-09T23:18:35.264Z] 8361.50 IOPS, 65.32 MiB/s [2024-12-09T23:18:36.200Z] 8400.00 IOPS, 65.62 MiB/s [2024-12-09T23:18:37.575Z] 8406.00 IOPS, 65.67 MiB/s [2024-12-09T23:18:38.514Z] 8419.40 IOPS, 65.78 MiB/s [2024-12-09T23:18:39.451Z] 8421.83 IOPS, 65.80 MiB/s [2024-12-09T23:18:40.386Z] 8429.29 IOPS, 65.85 MiB/s [2024-12-09T23:18:41.321Z] 8428.75 IOPS, 65.85 MiB/s [2024-12-09T23:18:42.257Z] 8424.00 IOPS, 65.81 MiB/s [2024-12-09T23:18:42.257Z] 8426.30 IOPS, 65.83 MiB/s 00:37:07.321 Latency(us) 00:37:07.321 [2024-12-09T23:18:42.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:07.321 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:37:07.321 Verification LBA range: start 0x0 length 0x1000 00:37:07.321 Nvme1n1 : 10.01 8428.37 65.85 0.00 0.00 15143.23 2607.19 21427.42 00:37:07.321 [2024-12-09T23:18:42.257Z] =================================================================================================================== 00:37:07.321 [2024-12-09T23:18:42.257Z] Total : 8428.37 65.85 0.00 0.00 15143.23 2607.19 21427.42 00:37:07.642 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=588864 00:37:07.642 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:37:07.642 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:07.642 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:37:07.642 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:37:07.642 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:37:07.642 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:37:07.642 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:07.642 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:07.642 { 00:37:07.642 "params": { 00:37:07.642 "name": "Nvme$subsystem", 00:37:07.642 "trtype": "$TEST_TRANSPORT", 00:37:07.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:07.642 "adrfam": "ipv4", 00:37:07.642 "trsvcid": "$NVMF_PORT", 00:37:07.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:07.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:07.642 "hdgst": ${hdgst:-false}, 00:37:07.642 "ddgst": ${ddgst:-false} 00:37:07.642 }, 00:37:07.642 "method": "bdev_nvme_attach_controller" 00:37:07.642 } 00:37:07.642 EOF 00:37:07.642 )") 00:37:07.642 [2024-12-10 00:18:42.388210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:37:07.642 [2024-12-10 00:18:42.388248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.642 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:37:07.642 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:37:07.642 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:37:07.642 00:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:07.642 "params": { 00:37:07.642 "name": "Nvme1", 00:37:07.643 "trtype": "tcp", 00:37:07.643 "traddr": "10.0.0.2", 00:37:07.643 "adrfam": "ipv4", 00:37:07.643 "trsvcid": "4420", 00:37:07.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:07.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:07.643 "hdgst": false, 00:37:07.643 "ddgst": false 00:37:07.643 }, 00:37:07.643 "method": "bdev_nvme_attach_controller" 00:37:07.643 }' 00:37:07.643 [2024-12-10 00:18:42.400179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.400192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.643 [2024-12-10 00:18:42.412179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.412189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.643 [2024-12-10 00:18:42.424175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.424184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.643 [2024-12-10 00:18:42.431478] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:37:07.643 [2024-12-10 00:18:42.431527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588864 ] 00:37:07.643 [2024-12-10 00:18:42.436177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.436189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.643 [2024-12-10 00:18:42.448175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.448185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.643 [2024-12-10 00:18:42.460174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.460184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.643 [2024-12-10 00:18:42.472173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.472182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.643 [2024-12-10 00:18:42.484174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.484184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.643 [2024-12-10 00:18:42.496174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.496184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.643 [2024-12-10 00:18:42.506131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.643 [2024-12-10 00:18:42.508175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.508184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.643 [2024-12-10 00:18:42.520178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.520194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.643 [2024-12-10 00:18:42.532184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.532201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.643 [2024-12-10 00:18:42.544174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.544200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.643 [2024-12-10 00:18:42.552271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.643 [2024-12-10 00:18:42.556175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.556186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.643 [2024-12-10 00:18:42.568183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.643 [2024-12-10 00:18:42.568199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.580177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:37:07.920 [2024-12-10 00:18:42.580192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.592176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.592189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.604177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.604188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.616177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.616188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.628175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.628185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.640184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.640203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.652178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.652193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.664184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.664200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.676177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.676191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.688176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.688187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.700174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.700184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.712176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.712185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.724179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.724193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.736177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.736186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.748173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.748184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 
00:18:42.760177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.760192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.772176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.772203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.784173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.784199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.796174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.796198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.808174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.808200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 [2024-12-10 00:18:42.820176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.820211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:07.920 Running I/O for 5 seconds... 00:37:07.920 [2024-12-10 00:18:42.838548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:07.920 [2024-12-10 00:18:42.838574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:42.853861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:42.853881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:42.868644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:42.868662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:42.884408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:42.884427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:42.896925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:42.896943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:42.909984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:42.910002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:42.925061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:42.925080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:42.940504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:42.940522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:42.956423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:37:08.186 [2024-12-10 00:18:42.956441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:42.969469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:42.969487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:42.980894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:42.980912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:42.995943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:42.995962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:43.007397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:43.007420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:43.021806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:43.021828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:43.036576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:43.036593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:43.052046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:43.052066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:43.066508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:43.066526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:43.081378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:43.081398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:43.096315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:43.096335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.186 [2024-12-10 00:18:43.109321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.186 [2024-12-10 00:18:43.109340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.124669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.124686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.139747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.139766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.151194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.151214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.166697] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.166716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.181474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.181493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.196486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.196505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.212188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.212208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.225387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.225406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.240788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.240807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.256818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.256837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.272441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.272461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.284436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.284454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.297998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.298017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.313611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.313630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.328824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.328846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.344723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.344741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.360027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.360047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.455 [2024-12-10 00:18:43.373980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.455 [2024-12-10 00:18:43.373998] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.730 [2024-12-10 00:18:43.389726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.389745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.405118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.405137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.420210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.420229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.433094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.433112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.448530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.448548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.464330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.464351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.477181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.477215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.492316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.492335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.503377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.503395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.518312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.518331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.533232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.533250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.548236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.548256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.560287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.560305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.573785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.573803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.590045] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.590064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.605355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.605373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.620415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.620433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.631026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.631044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:08.731 [2024-12-10 00:18:43.646073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:08.731 [2024-12-10 00:18:43.646091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.661521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.661539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.676342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.676361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.687687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.687706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.702300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.702319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.717509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.717528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.732550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.732567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.748919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.748938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.764344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.764363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.776782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.776800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.789610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.789628] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.804781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.804799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.819803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.819821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 16280.00 IOPS, 127.19 MiB/s [2024-12-09T23:18:43.961Z] [2024-12-10 00:18:43.833155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.833186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.848272] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.848290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.859686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.859704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.874105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.874123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.889026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.889045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.904224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.904243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.915982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.916000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.929570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.929588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.025 [2024-12-10 00:18:43.945142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.025 [2024-12-10 00:18:43.945170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:43.960305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:43.960323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:43.972822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:43.972840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:43.985690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:43.985709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 
00:18:44.001146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.001170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.015955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.015974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.029017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.029035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.044521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.044539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.060116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.060135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.073031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.073048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.088416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.088434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.100507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.100529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.115809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.115829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.129172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.129191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.140363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.140381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.154092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.154111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.169209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.169227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.179828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.179852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.194276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.194294] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.291 [2024-12-10 00:18:44.209547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.291 [2024-12-10 00:18:44.209565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.224883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.224901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.239849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.239868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.251613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.251631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.266203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.266221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.281373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.281392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.295804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.295822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.310747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.310766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.325626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.325643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.340878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.340896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.355518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.355537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.370209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.370232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.385317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.385335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.400401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.400429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.413182] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.413215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.428330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.428349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.438620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.438638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.454020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.454038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.468938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.468956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.556 [2024-12-10 00:18:44.484944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.556 [2024-12-10 00:18:44.484962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.500384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.500403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.514352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.514371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.529999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.530017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.544837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.544855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.560442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.560461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.573544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.573562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.588619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.588637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.604717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.604735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.620429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.620450] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.631300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.631321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.646307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.646331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.661330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.661348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.676908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.676927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.691879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.691899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.705007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.705026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.717784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.717802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.732826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.732845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:09.829 [2024-12-10 00:18:44.748095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:09.829 [2024-12-10 00:18:44.748115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.104 [2024-12-10 00:18:44.761627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.104 [2024-12-10 00:18:44.761646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.104 [2024-12-10 00:18:44.776585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.104 [2024-12-10 00:18:44.776604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.104 [2024-12-10 00:18:44.792247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.104 [2024-12-10 00:18:44.792266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.104 [2024-12-10 00:18:44.805981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.104 [2024-12-10 00:18:44.806001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.104 [2024-12-10 00:18:44.821183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.104 [2024-12-10 00:18:44.821202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.104 [2024-12-10 00:18:44.835289] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.104 [2024-12-10 00:18:44.835310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.104 16309.00 IOPS, 127.41 MiB/s [2024-12-09T23:18:45.040Z] [2024-12-10 00:18:44.849663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.104 [2024-12-10 00:18:44.849684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.104 [2024-12-10 00:18:44.864511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.104 [2024-12-10 00:18:44.864530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.104 [2024-12-10 00:18:44.880815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.104 [2024-12-10 00:18:44.880834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.104 [2024-12-10 00:18:44.896373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.104 [2024-12-10 00:18:44.896393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.104 [2024-12-10 00:18:44.907698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.104 [2024-12-10 00:18:44.907716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.104 [2024-12-10 00:18:44.921841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.104 [2024-12-10 00:18:44.921861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.104 [2024-12-10 00:18:44.936832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.104 [2024-12-10 00:18:44.936851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.104 [2024-12-10 00:18:44.951846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.105 [2024-12-10 00:18:44.951866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.105 [2024-12-10 00:18:44.966380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.105 [2024-12-10 00:18:44.966400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.105 [2024-12-10 00:18:44.981197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.105 [2024-12-10 00:18:44.981217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.105 [2024-12-10 00:18:44.996076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.105 [2024-12-10 00:18:44.996095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.105 [2024-12-10 00:18:45.009225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.105 [2024-12-10 00:18:45.009244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.105 [2024-12-10 00:18:45.024407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.105 [2024-12-10 00:18:45.024426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.373 [2024-12-10 00:18:45.036736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
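The paired subsystem.c / nvmf_rpc.c errors that dominate this stretch of the log are emitted while the second bdevperf instance (perfpid 588864, 5-second randrw workload) has I/O in flight: each pair records one attempt to add NSID 1 to nqn.2016-06.io.spdk:cnode1 that the target rejects because that namespace is already attached, and the interleaved per-second IOPS/MiB/s samples (for example the 16309.00 IOPS line just above) appear to be bdevperf's progress output. A hypothetical loop of the kind that would generate this RPC traffic is sketched below; it is not the zcopy.sh source, the iteration count and sleep are invented, and only the add call with malloc0 and NSID 1 appears verbatim earlier in the log.

#!/usr/bin/env bash
# Hypothetical illustration, not the actual test script: re-issuing
# nvmf_subsystem_add_ns for an NSID that is already attached produces the
# "Requested NSID 1 already in use" / "Unable to add namespace" pairs above.
for _ in $(seq 1 50); do
    ./scripts/rpc.py nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true   # expected to fail
    sleep 0.01
done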
00:37:10.373 [2024-12-10 00:18:45.036756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.373 [2024-12-10 00:18:45.049360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.373 [2024-12-10 00:18:45.049380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.373 [2024-12-10 00:18:45.060765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.373 [2024-12-10 00:18:45.060784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.373 [2024-12-10 00:18:45.073482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.373 [2024-12-10 00:18:45.073501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.373 [2024-12-10 00:18:45.088668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.373 [2024-12-10 00:18:45.088687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.373 [2024-12-10 00:18:45.104683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.373 [2024-12-10 00:18:45.104701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.373 [2024-12-10 00:18:45.119583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.373 [2024-12-10 00:18:45.119602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.373 [2024-12-10 00:18:45.131206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.374 [2024-12-10 00:18:45.131225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.374 [2024-12-10 00:18:45.146461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.374 [2024-12-10 00:18:45.146482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.374 [2024-12-10 00:18:45.161533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.374 [2024-12-10 00:18:45.161552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.374 [2024-12-10 00:18:45.177072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.374 [2024-12-10 00:18:45.177091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.374 [2024-12-10 00:18:45.192512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.374 [2024-12-10 00:18:45.192530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.374 [2024-12-10 00:18:45.207783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.374 [2024-12-10 00:18:45.207802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.374 [2024-12-10 00:18:45.221785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.374 [2024-12-10 00:18:45.221803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.374 [2024-12-10 00:18:45.237007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:37:10.374 [2024-12-10 00:18:45.237026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:10.374 [2024-12-10 00:18:45.252267] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the two errors above repeat as a pair roughly every 10-16 ms from 00:18:45.252 to 00:18:46.725 while the zcopy I/O run is in flight; the individual repetitions are omitted here]
16349.00 IOPS, 127.73 MiB/s [2024-12-09T23:18:45.868Z]
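The repeated pair collapsed above is the target rejecting nvmf_subsystem_add_ns RPCs because NSID 1 is already allocated on nqn.2016-06.io.spdk:cnode1; the nvmf_rpc_ns_paused frame is the same RPC failing after the subsystem was paused for the update. A minimal sketch of reproducing the same rejection by hand against a running SPDK target is shown below. Only the subsystem NQN, the -n 1 NSID, and the remove_ns call are taken from this log; the Malloc1 bdev name and the default RPC socket are assumptions.
  # Sketch only - assumes a running target that already exposes nqn.2016-06.io.spdk:cnode1
  # with namespace 1 attached, and a spare bdev named Malloc1 (both assumptions).
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # fails: Requested NSID 1 already in use
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1           # free NSID 1 first
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # succeeds once the NSID is free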
[the same add_ns_ext / ns_paused error pair keeps repeating at the same cadence from 00:18:46.741 to 00:18:47.777; individual repetitions omitted]
16344.25 IOPS, 127.69 MiB/s [2024-12-09T23:18:47.017Z]
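The periodic "... IOPS, ... MiB/s" samples in this run are two views of the same measurement: the job issues 8192-byte I/Os (see the "IO size: 8192" field in the summary that follows), so MiB/s is simply IOPS x 8192 / 2^20. A quick cross-check of the 16344.25 IOPS sample, as a sketch:
  awk 'BEGIN { iops = 16344.25; io_size = 8192; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
  # prints 127.69 MiB/s, matching the sample logged above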
[the same error pair continues through 00:18:47.847 as the 5.01-second I/O run completes; individual repetitions omitted]
16372.00 IOPS, 127.91 MiB/s [2024-12-09T23:18:47.888Z]
00:37:12.952 Latency(us)
00:37:12.952 [2024-12-09T23:18:47.888Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:37:12.952 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:37:12.952 Nvme1n1            :       5.01   16372.14     127.91      0.00     0.00    7809.92    2065.81   13107.20
00:37:12.952 [2024-12-09T23:18:47.888Z] ===================================================================================================================
00:37:12.952 [2024-12-09T23:18:47.888Z] Total              :              16372.14     127.91      0.00     0.00    7809.92    2065.81   13107.20
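The average latency in the summary above is consistent with Little's law for a single job holding a queue depth of 128: average latency is roughly queue depth / IOPS. A quick sanity sketch (not part of the test output) using the reported figures:
  awk 'BEGIN { qd = 128; iops = 16372.14; printf "%.0f us\n", qd / iops * 1e6 }'
  # prints 7818 us, close to the 7809.92 us average reported for Nvme1n1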
[between 00:18:47.856 and 00:18:48.012 the add_ns_ext / ns_paused error pair is logged a final few times while the namespace RPC loop drains; individual entries omitted]
00:37:13.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (588864) - No such process
00:37:13.212 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 588864
00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:13.212 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:13.212 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:37:13.212 delay0
00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.212 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:13.212 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.212 00:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:37:13.471 [2024-12-10 00:18:48.158842] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:21.592 Initializing NVMe Controllers 00:37:21.592 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:21.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:21.592 Initialization complete. Launching workers. 00:37:21.592 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 239, failed: 28103 00:37:21.592 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 28232, failed to submit 110 00:37:21.592 success 28120, unsuccessful 112, failed 0 00:37:21.592 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:37:21.592 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:37:21.592 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:21.592 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:37:21.592 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:21.593 rmmod nvme_tcp 00:37:21.593 rmmod nvme_fabrics 00:37:21.593 rmmod nvme_keyring 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 587027 ']' 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 587027 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 587027 ']' 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 587027 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 587027 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 587027' 00:37:21.593 killing process with pid 587027 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 587027 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 587027 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:21.593 00:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:22.972 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:22.972 00:37:22.972 real 0m32.408s 00:37:22.972 user 0m41.563s 00:37:22.972 sys 0m13.370s 00:37:22.972 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:22.972 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:37:22.972 ************************************ 00:37:22.972 END TEST nvmf_zcopy 00:37:22.972 ************************************ 00:37:22.972 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:37:22.972 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:22.972 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:22.972 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:22.972 ************************************ 00:37:22.972 START TEST nvmf_nmic 00:37:22.972 
************************************ 00:37:22.972 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:37:22.972 * Looking for test storage... 00:37:22.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:37:22.972 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:23.232 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:23.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.232 --rc genhtml_branch_coverage=1 00:37:23.233 --rc genhtml_function_coverage=1 00:37:23.233 --rc genhtml_legend=1 00:37:23.233 --rc geninfo_all_blocks=1 00:37:23.233 --rc geninfo_unexecuted_blocks=1 00:37:23.233 00:37:23.233 ' 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:23.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.233 --rc genhtml_branch_coverage=1 00:37:23.233 --rc genhtml_function_coverage=1 00:37:23.233 --rc genhtml_legend=1 00:37:23.233 --rc geninfo_all_blocks=1 00:37:23.233 --rc geninfo_unexecuted_blocks=1 00:37:23.233 00:37:23.233 ' 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:23.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.233 --rc genhtml_branch_coverage=1 00:37:23.233 --rc genhtml_function_coverage=1 00:37:23.233 --rc genhtml_legend=1 00:37:23.233 --rc geninfo_all_blocks=1 00:37:23.233 --rc geninfo_unexecuted_blocks=1 00:37:23.233 00:37:23.233 ' 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:23.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:23.233 --rc genhtml_branch_coverage=1 00:37:23.233 --rc genhtml_function_coverage=1 00:37:23.233 --rc genhtml_legend=1 00:37:23.233 --rc geninfo_all_blocks=1 00:37:23.233 --rc geninfo_unexecuted_blocks=1 00:37:23.233 00:37:23.233 ' 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:23.233 00:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:23.233 00:18:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:37:23.233 00:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:29.807 00:19:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:29.807 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.807 00:19:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:29.807 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:29.807 Found net devices under 0000:86:00.0: cvl_0_0 00:37:29.807 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:29.808 
00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:29.808 Found net devices under 0000:86:00.1: cvl_0_1 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
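[Editor's note] The nvmf_tcp_init commands traced just above and immediately below split the two e810 ports across a network namespace so that target and initiator traffic really crosses the link: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, while cvl_0_1 stays in the root namespace as 10.0.0.1/24, after which both directions are verified with ping. A stand-alone sketch of the same sequence (interface names are specific to this CI host's e810 NICs and are placeholders elsewhere):

    # sketch only: substitute your own back-to-back port names
    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                    # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    ping -c 1 10.0.0.2                                   # root namespace -> target address
    ip netns exec "$NS" ping -c 1 10.0.0.1               # namespace -> initiator address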
00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:29.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:29.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:37:29.808 00:37:29.808 --- 10.0.0.2 ping statistics --- 00:37:29.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.808 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:29.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:29.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:37:29.808 00:37:29.808 --- 10.0.0.1 ping statistics --- 00:37:29.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:29.808 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=594405 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 594405 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 594405 ']' 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:29.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:29.808 00:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:29.808 [2024-12-10 00:19:03.938482] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:29.808 [2024-12-10 00:19:03.939429] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:37:29.808 [2024-12-10 00:19:03.939462] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:29.808 [2024-12-10 00:19:04.018577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:29.808 [2024-12-10 00:19:04.060970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:29.808 [2024-12-10 00:19:04.061006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:29.808 [2024-12-10 00:19:04.061013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:29.808 [2024-12-10 00:19:04.061019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:29.808 [2024-12-10 00:19:04.061024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:29.808 [2024-12-10 00:19:04.062456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:29.808 [2024-12-10 00:19:04.062568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:29.808 [2024-12-10 00:19:04.062672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.808 [2024-12-10 00:19:04.062673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:29.808 [2024-12-10 00:19:04.130751] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:29.808 [2024-12-10 00:19:04.131598] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:29.808 [2024-12-10 00:19:04.131705] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
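[Editor's note] nvmfappstart above launches the target inside the namespace with interrupt mode enabled (-i 0 -e 0xFFFF --interrupt-mode -m 0xF), and waitforlisten then blocks until the RPC socket answers; the NOTICE lines confirm four reactors starting and each nvmf_tgt poll-group thread switching to interrupt mode. A simplified, hedged sketch of that start-up handshake (the relative paths and the polling loop are stand-ins for the CI workspace paths and the autotest_common.sh helper, not their exact code):

    # simplified stand-in for nvmfappstart/waitforlisten
    NS=cvl_0_0_ns_spdk
    RPC_SOCK=/var/tmp/spdk.sock
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do                  # wait for the app to create its RPC socket
        [ -S "$RPC_SOCK" ] && break
        sleep 0.1
    done
    ./scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods > /dev/null   # RPC answering => target is ready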
00:37:29.808 [2024-12-10 00:19:04.131836] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:29.808 [2024-12-10 00:19:04.131899] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:29.808 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:29.808 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:37:29.808 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:29.808 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:29.808 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:29.808 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:29.808 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:29.808 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.808 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:29.808 [2024-12-10 00:19:04.199511] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:29.808 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.808 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:29.808 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.808 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:29.808 Malloc0 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
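[Editor's note] The rpc_cmd calls traced above (target/nmic.sh lines 17-23) build the test target: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. Consolidated as plain rpc.py invocations (a sketch; the test itself goes through its rpc_cmd wrapper, and the rpc.py path is relative to the SPDK repo):

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192                     # transport flags copied verbatim from the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0                        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420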
00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:29.809 [2024-12-10 00:19:04.279569] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:37:29.809 test case1: single bdev can't be used in multiple subsystems 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:29.809 [2024-12-10 00:19:04.311125] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:37:29.809 [2024-12-10 00:19:04.311146] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:37:29.809 [2024-12-10 00:19:04.311154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:37:29.809 request: 00:37:29.809 { 00:37:29.809 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:37:29.809 "namespace": { 00:37:29.809 "bdev_name": "Malloc0", 00:37:29.809 "no_auto_visible": false, 00:37:29.809 "hide_metadata": false 00:37:29.809 }, 00:37:29.809 "method": "nvmf_subsystem_add_ns", 00:37:29.809 "req_id": 1 00:37:29.809 } 00:37:29.809 Got JSON-RPC error response 00:37:29.809 response: 00:37:29.809 { 00:37:29.809 "code": -32602, 00:37:29.809 "message": "Invalid parameters" 00:37:29.809 } 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:37:29.809 00:19:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:37:29.809 Adding namespace failed - expected result. 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:37:29.809 test case2: host connect to nvmf target in multiple paths 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:29.809 [2024-12-10 00:19:04.323225] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:29.809 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:37:30.069 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:37:30.069 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:37:30.069 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:30.069 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:37:30.069 00:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:37:32.605 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:32.605 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:32.605 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:32.605 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:37:32.605 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:32.605 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:37:32.605 00:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:37:32.605 [global] 00:37:32.605 thread=1 00:37:32.605 invalidate=1 
00:37:32.605 rw=write 00:37:32.605 time_based=1 00:37:32.605 runtime=1 00:37:32.605 ioengine=libaio 00:37:32.605 direct=1 00:37:32.605 bs=4096 00:37:32.605 iodepth=1 00:37:32.605 norandommap=0 00:37:32.605 numjobs=1 00:37:32.605 00:37:32.605 verify_dump=1 00:37:32.605 verify_backlog=512 00:37:32.605 verify_state_save=0 00:37:32.605 do_verify=1 00:37:32.605 verify=crc32c-intel 00:37:32.605 [job0] 00:37:32.605 filename=/dev/nvme0n1 00:37:32.605 Could not set queue depth (nvme0n1) 00:37:32.605 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:32.605 fio-3.35 00:37:32.605 Starting 1 thread 00:37:33.542 00:37:33.542 job0: (groupid=0, jobs=1): err= 0: pid=595087: Tue Dec 10 00:19:08 2024 00:37:33.542 read: IOPS=22, BW=90.0KiB/s (92.2kB/s)(92.0KiB/1022msec) 00:37:33.542 slat (nsec): min=10259, max=23535, avg=20735.91, stdev=2385.38 00:37:33.542 clat (usec): min=40449, max=41081, avg=40946.17, stdev=127.09 00:37:33.542 lat (usec): min=40459, max=41102, avg=40966.91, stdev=128.96 00:37:33.542 clat percentiles (usec): 00:37:33.542 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:37:33.542 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:33.542 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:33.542 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:33.542 | 99.99th=[41157] 00:37:33.542 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:37:33.542 slat (nsec): min=10443, max=38069, avg=11309.32, stdev=1558.38 00:37:33.542 clat (usec): min=130, max=228, avg=140.46, stdev= 7.86 00:37:33.542 lat (usec): min=141, max=253, avg=151.77, stdev= 8.51 00:37:33.542 clat percentiles (usec): 00:37:33.542 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 137], 00:37:33.542 | 30.00th=[ 139], 40.00th=[ 139], 50.00th=[ 139], 60.00th=[ 141], 00:37:33.542 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 145], 95.00th=[ 151], 00:37:33.542 | 99.00th=[ 161], 99.50th=[ 217], 99.90th=[ 229], 99.95th=[ 229], 00:37:33.542 | 99.99th=[ 229] 00:37:33.542 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:37:33.542 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:33.542 lat (usec) : 250=95.70% 00:37:33.542 lat (msec) : 50=4.30% 00:37:33.542 cpu : usr=0.29%, sys=0.59%, ctx=535, majf=0, minf=1 00:37:33.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:33.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:33.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:33.542 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:33.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:33.542 00:37:33.542 Run status group 0 (all jobs): 00:37:33.542 READ: bw=90.0KiB/s (92.2kB/s), 90.0KiB/s-90.0KiB/s (92.2kB/s-92.2kB/s), io=92.0KiB (94.2kB), run=1022-1022msec 00:37:33.542 WRITE: bw=2004KiB/s (2052kB/s), 2004KiB/s-2004KiB/s (2052kB/s-2052kB/s), io=2048KiB (2097kB), run=1022-1022msec 00:37:33.542 00:37:33.542 Disk stats (read/write): 00:37:33.542 nvme0n1: ios=70/512, merge=0/0, ticks=841/67, in_queue=908, util=91.18% 00:37:33.543 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:37:33.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:37:33.806 00:19:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:33.806 rmmod nvme_tcp 00:37:33.806 rmmod nvme_fabrics 00:37:33.806 rmmod nvme_keyring 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 594405 ']' 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 594405 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 594405 ']' 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 594405 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 594405 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 594405' 00:37:33.806 killing process with pid 594405 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 594405 00:37:33.806 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 594405 00:37:34.065 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:34.065 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:34.065 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:34.065 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:37:34.065 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:37:34.065 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:37:34.065 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:34.065 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:34.065 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:34.065 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:34.065 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:34.065 00:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:36.617 00:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:36.617 00:37:36.617 real 0m13.177s 00:37:36.617 user 0m24.272s 00:37:36.617 sys 0m6.057s 00:37:36.617 00:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:36.617 00:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:37:36.617 ************************************ 00:37:36.617 END TEST nvmf_nmic 00:37:36.617 ************************************ 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:36.617 ************************************ 00:37:36.617 START TEST nvmf_fio_target 00:37:36.617 ************************************ 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:37:36.617 * Looking for test storage... 
00:37:36.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:36.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.617 --rc genhtml_branch_coverage=1 00:37:36.617 --rc genhtml_function_coverage=1 00:37:36.617 --rc genhtml_legend=1 00:37:36.617 --rc geninfo_all_blocks=1 00:37:36.617 --rc geninfo_unexecuted_blocks=1 00:37:36.617 00:37:36.617 ' 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:36.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.617 --rc genhtml_branch_coverage=1 00:37:36.617 --rc genhtml_function_coverage=1 00:37:36.617 --rc genhtml_legend=1 00:37:36.617 --rc geninfo_all_blocks=1 00:37:36.617 --rc geninfo_unexecuted_blocks=1 00:37:36.617 00:37:36.617 ' 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:36.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.617 --rc genhtml_branch_coverage=1 00:37:36.617 --rc genhtml_function_coverage=1 00:37:36.617 --rc genhtml_legend=1 00:37:36.617 --rc geninfo_all_blocks=1 00:37:36.617 --rc geninfo_unexecuted_blocks=1 00:37:36.617 00:37:36.617 ' 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:36.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:36.617 --rc genhtml_branch_coverage=1 00:37:36.617 --rc genhtml_function_coverage=1 00:37:36.617 --rc genhtml_legend=1 00:37:36.617 --rc geninfo_all_blocks=1 00:37:36.617 --rc geninfo_unexecuted_blocks=1 00:37:36.617 
00:37:36.617 ' 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.617 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:37:36.618 00:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:43.192 00:19:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:37:43.192 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:43.193 00:19:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:43.193 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:43.193 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:43.193 Found net 
devices under 0000:86:00.0: cvl_0_0 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:43.193 Found net devices under 0000:86:00.1: cvl_0_1 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:43.193 00:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:43.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:43.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:37:43.193 00:37:43.193 --- 10.0.0.2 ping statistics --- 00:37:43.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.193 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:43.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:43.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:37:43.193 00:37:43.193 --- 10.0.0.1 ping statistics --- 00:37:43.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:43.193 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:43.193 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:43.194 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:43.194 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:37:43.194 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:43.194 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:43.194 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:43.194 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=598850 00:37:43.194 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 598850 00:37:43.194 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:37:43.194 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 598850 ']' 00:37:43.194 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:43.194 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:43.194 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:43.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
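For anyone reconstructing the topology from the trace above: nvmf_tcp_init splits the two E810 ports between the root namespace and a private one, so the SPDK target and the kernel initiator exercise real NICs on a single host. A condensed sketch of that setup, using only the interface names, addresses and namespace name shown in this run (it is a paraphrase of what common.sh just did, not its literal code):

    # target-side port moves into its own namespace; the initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # addressing as assigned in this run
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP port toward the initiator interface, then sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two ping transcripts earlier in the trace are exactly that check; from here on, 10.0.0.2 port 4420 is the address every listener and connect call uses.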
00:37:43.194 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:43.194 00:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:43.194 [2024-12-10 00:19:17.307960] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:43.194 [2024-12-10 00:19:17.308868] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:37:43.194 [2024-12-10 00:19:17.308899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:43.194 [2024-12-10 00:19:17.389220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:43.194 [2024-12-10 00:19:17.430004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:43.194 [2024-12-10 00:19:17.430044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:43.194 [2024-12-10 00:19:17.430052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:43.194 [2024-12-10 00:19:17.430058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:43.194 [2024-12-10 00:19:17.430063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:43.194 [2024-12-10 00:19:17.431619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:43.194 [2024-12-10 00:19:17.431637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:43.194 [2024-12-10 00:19:17.431727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:43.194 [2024-12-10 00:19:17.431728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:43.194 [2024-12-10 00:19:17.499835] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:43.194 [2024-12-10 00:19:17.500134] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:43.194 [2024-12-10 00:19:17.500576] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:43.194 [2024-12-10 00:19:17.500863] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:43.194 [2024-12-10 00:19:17.500912] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
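The EAL banner and the reactor/interrupt NOTICE lines just above are the target coming up under --interrupt-mode: one reactor per core in the 0xF mask, with the app thread and the four nvmf_tgt_poll_group threads switched to interrupt-driven operation, so the reactors block in epoll instead of busy-polling. Stripped of the harness, the launch that produces this looks roughly like the sketch below (binary and script paths shortened; the polling loop is only a stand-in for the suite's waitforlisten helper):

    # run the target inside the test namespace: shm id 0, all tracepoint groups, interrupt mode, 4 cores
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!

    # the RPC socket is a filesystem object, so later rpc.py calls need no netns prefix;
    # wait for it to answer before configuring anything
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The -i 0 shared-memory id is also what the trace notices point at when they mention copying /dev/shm/nvmf_trace.0 for offline analysis.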
00:37:43.454 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:43.454 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:37:43.454 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:43.454 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:43.454 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:37:43.454 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:43.454 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:43.454 [2024-12-10 00:19:18.352621] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:43.712 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:43.712 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:37:43.712 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:43.971 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:37:43.971 00:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:44.231 00:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:37:44.231 00:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:44.490 00:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:37:44.490 00:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:37:44.749 00:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:44.749 00:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:37:44.749 00:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:45.009 00:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:37:45.009 00:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 
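From nvmf_create_transport onward, the trace is fio.sh building the device stack over rpc.py: seven malloc bdevs, a RAID-0 and a concat volume on top of five of them, one subsystem with four namespaces, a TCP listener on the namespaced port, and finally the kernel-side nvme connect (the listener, the remaining add_ns calls and the connect appear in the entries that follow). Pulled together, the sequence amounts to this sketch (rpc.py path shortened, loops standing in for the unrolled calls in the trace):

    rpc=./scripts/rpc.py   # the log uses the full workspace path

    # TCP transport with the options fio.sh passes in this run
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # seven 64 MiB, 512 B-block malloc bdevs: Malloc0..Malloc6
    for _ in $(seq 1 7); do $rpc bdev_malloc_create 64 512; done

    # two go straight into the subsystem, the other five back a RAID-0 and a concat bdev
    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

    # one subsystem, four namespaces, one listener on the target-side address
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: connect and let the namespaces surface as block devices
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

The four block devices this produces are the /dev/nvme0n1 through /dev/nvme0n4 filenames in the fio job files further down, and the SPDKISFASTANDAWESOME serial is what waitforserial greps out of lsblk before the fio runs are allowed to start.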
00:37:45.267 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:37:45.267 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:37:45.526 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:37:45.785 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:37:45.786 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:45.786 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:37:45.786 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:37:46.044 00:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:46.303 [2024-12-10 00:19:21.016486] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:46.303 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:37:46.562 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:37:46.562 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:37:46.821 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:37:46.821 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:37:46.821 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:37:46.821 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:37:46.821 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:37:46.821 00:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:37:49.357 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:37:49.357 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:37:49.357 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:37:49.357 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:37:49.357 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:37:49.357 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:37:49.357 00:19:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:37:49.357 [global] 00:37:49.357 thread=1 00:37:49.357 invalidate=1 00:37:49.357 rw=write 00:37:49.357 time_based=1 00:37:49.357 runtime=1 00:37:49.357 ioengine=libaio 00:37:49.358 direct=1 00:37:49.358 bs=4096 00:37:49.358 iodepth=1 00:37:49.358 norandommap=0 00:37:49.358 numjobs=1 00:37:49.358 00:37:49.358 verify_dump=1 00:37:49.358 verify_backlog=512 00:37:49.358 verify_state_save=0 00:37:49.358 do_verify=1 00:37:49.358 verify=crc32c-intel 00:37:49.358 [job0] 00:37:49.358 filename=/dev/nvme0n1 00:37:49.358 [job1] 00:37:49.358 filename=/dev/nvme0n2 00:37:49.358 [job2] 00:37:49.358 filename=/dev/nvme0n3 00:37:49.358 [job3] 00:37:49.358 filename=/dev/nvme0n4 00:37:49.358 Could not set queue depth (nvme0n1) 00:37:49.358 Could not set queue depth (nvme0n2) 00:37:49.358 Could not set queue depth (nvme0n3) 00:37:49.358 Could not set queue depth (nvme0n4) 00:37:49.358 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:49.358 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:49.358 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:49.358 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:49.358 fio-3.35 00:37:49.358 Starting 4 threads 00:37:50.735 00:37:50.735 job0: (groupid=0, jobs=1): err= 0: pid=599980: Tue Dec 10 00:19:25 2024 00:37:50.735 read: IOPS=554, BW=2220KiB/s (2273kB/s)(2224KiB/1002msec) 00:37:50.735 slat (nsec): min=7152, max=26007, avg=8754.33, stdev=2839.40 00:37:50.735 clat (usec): min=186, max=41396, avg=1436.47, stdev=6906.75 00:37:50.735 lat (usec): min=193, max=41405, avg=1445.22, stdev=6907.55 00:37:50.735 clat percentiles (usec): 00:37:50.735 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:37:50.735 | 30.00th=[ 208], 40.00th=[ 210], 50.00th=[ 212], 60.00th=[ 215], 00:37:50.735 | 70.00th=[ 219], 80.00th=[ 223], 90.00th=[ 229], 95.00th=[ 241], 00:37:50.735 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:50.735 | 99.99th=[41157] 00:37:50.735 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:37:50.735 slat (nsec): min=10554, max=43163, avg=11966.35, stdev=1889.63 00:37:50.735 clat (usec): min=135, max=1746, avg=176.36, stdev=64.72 00:37:50.735 lat (usec): min=147, max=1759, avg=188.32, stdev=64.85 00:37:50.735 clat percentiles (usec): 00:37:50.735 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 159], 00:37:50.735 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 176], 00:37:50.735 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 
200], 95.00th=[ 206], 00:37:50.735 | 99.00th=[ 231], 99.50th=[ 258], 99.90th=[ 1385], 99.95th=[ 1745], 00:37:50.735 | 99.99th=[ 1745] 00:37:50.735 bw ( KiB/s): min= 8192, max= 8192, per=45.33%, avg=8192.00, stdev= 0.00, samples=1 00:37:50.735 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:37:50.735 lat (usec) : 250=98.04%, 500=0.76% 00:37:50.735 lat (msec) : 2=0.13%, 50=1.08% 00:37:50.735 cpu : usr=1.40%, sys=2.50%, ctx=1583, majf=0, minf=1 00:37:50.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:50.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.735 issued rwts: total=556,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:50.735 job1: (groupid=0, jobs=1): err= 0: pid=599988: Tue Dec 10 00:19:25 2024 00:37:50.735 read: IOPS=496, BW=1984KiB/s (2032kB/s)(2024KiB/1020msec) 00:37:50.735 slat (nsec): min=6949, max=24913, avg=8304.23, stdev=2786.95 00:37:50.735 clat (usec): min=222, max=41320, avg=1813.95, stdev=7792.29 00:37:50.735 lat (usec): min=230, max=41332, avg=1822.25, stdev=7794.81 00:37:50.735 clat percentiles (usec): 00:37:50.735 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 237], 20.00th=[ 241], 00:37:50.735 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:37:50.735 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 269], 00:37:50.735 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:50.735 | 99.99th=[41157] 00:37:50.735 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:37:50.735 slat (nsec): min=9951, max=62332, avg=11640.51, stdev=3142.25 00:37:50.735 clat (usec): min=144, max=1437, avg=171.97, stdev=57.17 00:37:50.735 lat (usec): min=157, max=1449, avg=183.61, stdev=57.48 00:37:50.735 clat percentiles (usec): 00:37:50.735 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 157], 20.00th=[ 161], 00:37:50.735 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 172], 00:37:50.735 | 70.00th=[ 174], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:37:50.735 | 99.00th=[ 208], 99.50th=[ 258], 99.90th=[ 1434], 99.95th=[ 1434], 00:37:50.735 | 99.99th=[ 1434] 00:37:50.735 bw ( KiB/s): min= 4096, max= 4096, per=22.67%, avg=4096.00, stdev= 0.00, samples=1 00:37:50.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:50.735 lat (usec) : 250=84.48%, 500=13.46% 00:37:50.735 lat (msec) : 2=0.10%, 20=0.10%, 50=1.87% 00:37:50.735 cpu : usr=0.88%, sys=1.47%, ctx=1019, majf=0, minf=2 00:37:50.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:50.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.735 issued rwts: total=506,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:50.735 job2: (groupid=0, jobs=1): err= 0: pid=599998: Tue Dec 10 00:19:25 2024 00:37:50.735 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:37:50.735 slat (nsec): min=9914, max=25013, avg=21586.95, stdev=2768.75 00:37:50.735 clat (usec): min=40865, max=42021, avg=41013.32, stdev=230.77 00:37:50.735 lat (usec): min=40878, max=42042, avg=41034.91, stdev=230.80 00:37:50.735 clat percentiles (usec): 00:37:50.735 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 
20.00th=[41157], 00:37:50.735 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:50.735 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:50.735 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:50.735 | 99.99th=[42206] 00:37:50.735 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:37:50.735 slat (nsec): min=10039, max=44853, avg=11809.95, stdev=2709.31 00:37:50.735 clat (usec): min=147, max=1794, avg=187.86, stdev=73.38 00:37:50.735 lat (usec): min=159, max=1804, avg=199.67, stdev=73.55 00:37:50.735 clat percentiles (usec): 00:37:50.735 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:37:50.735 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:37:50.735 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 212], 00:37:50.735 | 99.00th=[ 241], 99.50th=[ 289], 99.90th=[ 1795], 99.95th=[ 1795], 00:37:50.735 | 99.99th=[ 1795] 00:37:50.735 bw ( KiB/s): min= 4096, max= 4096, per=22.67%, avg=4096.00, stdev= 0.00, samples=1 00:37:50.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:50.735 lat (usec) : 250=94.94%, 500=0.75% 00:37:50.735 lat (msec) : 2=0.19%, 50=4.12% 00:37:50.735 cpu : usr=0.40%, sys=0.89%, ctx=535, majf=0, minf=2 00:37:50.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:50.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.735 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:50.735 job3: (groupid=0, jobs=1): err= 0: pid=600004: Tue Dec 10 00:19:25 2024 00:37:50.735 read: IOPS=2092, BW=8372KiB/s (8573kB/s)(8380KiB/1001msec) 00:37:50.735 slat (nsec): min=7316, max=39035, avg=8409.87, stdev=1310.81 00:37:50.735 clat (usec): min=205, max=558, avg=233.49, stdev=21.90 00:37:50.735 lat (usec): min=214, max=567, avg=241.90, stdev=22.02 00:37:50.735 clat percentiles (usec): 00:37:50.735 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 219], 00:37:50.735 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 235], 00:37:50.735 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:37:50.735 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 482], 99.95th=[ 545], 00:37:50.735 | 99.99th=[ 562] 00:37:50.735 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:37:50.735 slat (nsec): min=10120, max=43744, avg=11295.32, stdev=1724.31 00:37:50.735 clat (usec): min=130, max=1366, avg=176.11, stdev=38.92 00:37:50.735 lat (usec): min=153, max=1376, avg=187.41, stdev=39.03 00:37:50.735 clat percentiles (usec): 00:37:50.735 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 149], 20.00th=[ 151], 00:37:50.735 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 169], 60.00th=[ 178], 00:37:50.735 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 206], 95.00th=[ 262], 00:37:50.735 | 99.00th=[ 277], 99.50th=[ 302], 99.90th=[ 322], 99.95th=[ 363], 00:37:50.735 | 99.99th=[ 1369] 00:37:50.735 bw ( KiB/s): min= 9760, max= 9760, per=54.01%, avg=9760.00, stdev= 0.00, samples=1 00:37:50.735 iops : min= 2440, max= 2440, avg=2440.00, stdev= 0.00, samples=1 00:37:50.735 lat (usec) : 250=89.39%, 500=10.55%, 750=0.04% 00:37:50.735 lat (msec) : 2=0.02% 00:37:50.735 cpu : usr=3.30%, sys=7.90%, ctx=4656, majf=0, minf=2 00:37:50.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:37:50.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:50.735 issued rwts: total=2095,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:50.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:50.735 00:37:50.735 Run status group 0 (all jobs): 00:37:50.735 READ: bw=12.2MiB/s (12.8MB/s), 87.4KiB/s-8372KiB/s (89.5kB/s-8573kB/s), io=12.4MiB (13.0MB), run=1001-1020msec 00:37:50.735 WRITE: bw=17.6MiB/s (18.5MB/s), 2008KiB/s-9.99MiB/s (2056kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1020msec 00:37:50.735 00:37:50.735 Disk stats (read/write): 00:37:50.735 nvme0n1: ios=600/1024, merge=0/0, ticks=1023/176, in_queue=1199, util=97.80% 00:37:50.735 nvme0n2: ios=516/512, merge=0/0, ticks=802/82, in_queue=884, util=90.95% 00:37:50.735 nvme0n3: ios=18/512, merge=0/0, ticks=739/85, in_queue=824, util=88.83% 00:37:50.735 nvme0n4: ios=1832/2048, merge=0/0, ticks=407/351, in_queue=758, util=89.58% 00:37:50.735 00:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:37:50.735 [global] 00:37:50.735 thread=1 00:37:50.735 invalidate=1 00:37:50.735 rw=randwrite 00:37:50.735 time_based=1 00:37:50.735 runtime=1 00:37:50.735 ioengine=libaio 00:37:50.735 direct=1 00:37:50.735 bs=4096 00:37:50.735 iodepth=1 00:37:50.735 norandommap=0 00:37:50.735 numjobs=1 00:37:50.735 00:37:50.735 verify_dump=1 00:37:50.735 verify_backlog=512 00:37:50.735 verify_state_save=0 00:37:50.736 do_verify=1 00:37:50.736 verify=crc32c-intel 00:37:50.736 [job0] 00:37:50.736 filename=/dev/nvme0n1 00:37:50.736 [job1] 00:37:50.736 filename=/dev/nvme0n2 00:37:50.736 [job2] 00:37:50.736 filename=/dev/nvme0n3 00:37:50.736 [job3] 00:37:50.736 filename=/dev/nvme0n4 00:37:50.736 Could not set queue depth (nvme0n1) 00:37:50.736 Could not set queue depth (nvme0n2) 00:37:50.736 Could not set queue depth (nvme0n3) 00:37:50.736 Could not set queue depth (nvme0n4) 00:37:50.994 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:50.994 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:50.994 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:50.994 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:50.994 fio-3.35 00:37:50.994 Starting 4 threads 00:37:52.374 00:37:52.374 job0: (groupid=0, jobs=1): err= 0: pid=600396: Tue Dec 10 00:19:26 2024 00:37:52.374 read: IOPS=2287, BW=9151KiB/s (9370kB/s)(9160KiB/1001msec) 00:37:52.374 slat (nsec): min=6330, max=30709, avg=7152.78, stdev=1121.02 00:37:52.374 clat (usec): min=169, max=498, avg=242.30, stdev=21.96 00:37:52.374 lat (usec): min=177, max=506, avg=249.46, stdev=21.96 00:37:52.374 clat percentiles (usec): 00:37:52.374 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 219], 20.00th=[ 239], 00:37:52.374 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:37:52.374 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 258], 00:37:52.374 | 99.00th=[ 289], 99.50th=[ 318], 99.90th=[ 437], 99.95th=[ 490], 00:37:52.374 | 99.99th=[ 498] 00:37:52.374 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:37:52.374 slat (nsec): min=8732, 
max=85431, avg=9798.54, stdev=1798.12 00:37:52.374 clat (usec): min=113, max=513, avg=153.90, stdev=33.48 00:37:52.374 lat (usec): min=122, max=522, avg=163.70, stdev=33.75 00:37:52.374 clat percentiles (usec): 00:37:52.374 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 133], 00:37:52.374 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:37:52.374 | 70.00th=[ 149], 80.00th=[ 176], 90.00th=[ 215], 95.00th=[ 221], 00:37:52.374 | 99.00th=[ 243], 99.50th=[ 258], 99.90th=[ 306], 99.95th=[ 334], 00:37:52.374 | 99.99th=[ 515] 00:37:52.374 bw ( KiB/s): min=11496, max=11496, per=47.71%, avg=11496.00, stdev= 0.00, samples=1 00:37:52.374 iops : min= 2874, max= 2874, avg=2874.00, stdev= 0.00, samples=1 00:37:52.374 lat (usec) : 250=87.26%, 500=12.72%, 750=0.02% 00:37:52.374 cpu : usr=1.10%, sys=5.50%, ctx=4851, majf=0, minf=1 00:37:52.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:52.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.374 issued rwts: total=2290,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:52.374 job1: (groupid=0, jobs=1): err= 0: pid=600410: Tue Dec 10 00:19:26 2024 00:37:52.374 read: IOPS=2264, BW=9059KiB/s (9276kB/s)(9068KiB/1001msec) 00:37:52.374 slat (nsec): min=4658, max=29150, avg=6446.23, stdev=1834.37 00:37:52.374 clat (usec): min=168, max=486, avg=242.63, stdev=21.45 00:37:52.374 lat (usec): min=173, max=494, avg=249.08, stdev=21.74 00:37:52.374 clat percentiles (usec): 00:37:52.374 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 208], 20.00th=[ 239], 00:37:52.374 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:37:52.374 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 260], 00:37:52.374 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 478], 99.95th=[ 486], 00:37:52.374 | 99.99th=[ 486] 00:37:52.374 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:37:52.374 slat (nsec): min=6496, max=37026, avg=9638.41, stdev=2616.64 00:37:52.374 clat (usec): min=111, max=2006, avg=154.68, stdev=50.22 00:37:52.374 lat (usec): min=119, max=2026, avg=164.32, stdev=50.68 00:37:52.374 clat percentiles (usec): 00:37:52.374 | 1.00th=[ 121], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 133], 00:37:52.374 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:37:52.374 | 70.00th=[ 147], 80.00th=[ 180], 90.00th=[ 212], 95.00th=[ 221], 00:37:52.374 | 99.00th=[ 245], 99.50th=[ 281], 99.90th=[ 408], 99.95th=[ 570], 00:37:52.374 | 99.99th=[ 2008] 00:37:52.374 bw ( KiB/s): min=11376, max=11376, per=47.21%, avg=11376.00, stdev= 0.00, samples=1 00:37:52.374 iops : min= 2844, max= 2844, avg=2844.00, stdev= 0.00, samples=1 00:37:52.374 lat (usec) : 250=83.07%, 500=16.88%, 750=0.02% 00:37:52.374 lat (msec) : 4=0.02% 00:37:52.374 cpu : usr=3.70%, sys=5.80%, ctx=4831, majf=0, minf=1 00:37:52.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:52.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.374 issued rwts: total=2267,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:52.374 job2: (groupid=0, jobs=1): err= 0: pid=600427: Tue Dec 10 00:19:26 2024 00:37:52.374 read: IOPS=22, 
BW=90.7KiB/s (92.9kB/s)(92.0KiB/1014msec) 00:37:52.374 slat (nsec): min=12891, max=39085, avg=15240.04, stdev=5600.54 00:37:52.374 clat (usec): min=265, max=41049, avg=39197.29, stdev=8487.09 00:37:52.374 lat (usec): min=279, max=41063, avg=39212.53, stdev=8487.39 00:37:52.374 clat percentiles (usec): 00:37:52.374 | 1.00th=[ 265], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:37:52.374 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:52.374 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:52.374 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:37:52.374 | 99.99th=[41157] 00:37:52.374 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:37:52.374 slat (nsec): min=9788, max=37532, avg=15665.53, stdev=5051.86 00:37:52.374 clat (usec): min=156, max=313, avg=192.74, stdev=18.21 00:37:52.374 lat (usec): min=166, max=328, avg=208.40, stdev=20.71 00:37:52.374 clat percentiles (usec): 00:37:52.374 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 176], 00:37:52.374 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:37:52.374 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 219], 00:37:52.374 | 99.00th=[ 247], 99.50th=[ 273], 99.90th=[ 314], 99.95th=[ 314], 00:37:52.374 | 99.99th=[ 314] 00:37:52.374 bw ( KiB/s): min= 4096, max= 4096, per=17.00%, avg=4096.00, stdev= 0.00, samples=1 00:37:52.374 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:52.374 lat (usec) : 250=94.95%, 500=0.93% 00:37:52.374 lat (msec) : 50=4.11% 00:37:52.374 cpu : usr=0.10%, sys=1.18%, ctx=537, majf=0, minf=1 00:37:52.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:52.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.374 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:52.374 job3: (groupid=0, jobs=1): err= 0: pid=600433: Tue Dec 10 00:19:26 2024 00:37:52.374 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:37:52.374 slat (nsec): min=9463, max=23570, avg=22100.64, stdev=2944.38 00:37:52.374 clat (usec): min=40472, max=42046, avg=41038.37, stdev=341.06 00:37:52.374 lat (usec): min=40482, max=42065, avg=41060.47, stdev=341.60 00:37:52.374 clat percentiles (usec): 00:37:52.374 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:37:52.374 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:52.374 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:37:52.374 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:52.374 | 99.99th=[42206] 00:37:52.374 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:37:52.374 slat (nsec): min=8459, max=36456, avg=10541.67, stdev=2449.89 00:37:52.374 clat (usec): min=130, max=336, avg=214.35, stdev=26.75 00:37:52.374 lat (usec): min=140, max=354, avg=224.89, stdev=27.50 00:37:52.374 clat percentiles (usec): 00:37:52.374 | 1.00th=[ 143], 5.00th=[ 169], 10.00th=[ 190], 20.00th=[ 202], 00:37:52.374 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:37:52.374 | 70.00th=[ 223], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 260], 00:37:52.374 | 99.00th=[ 310], 99.50th=[ 330], 99.90th=[ 338], 99.95th=[ 338], 00:37:52.374 | 99.99th=[ 338] 00:37:52.374 bw ( KiB/s): min= 
4096, max= 4096, per=17.00%, avg=4096.00, stdev= 0.00, samples=1 00:37:52.374 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:37:52.374 lat (usec) : 250=90.07%, 500=5.81% 00:37:52.374 lat (msec) : 50=4.12% 00:37:52.374 cpu : usr=0.29%, sys=0.49%, ctx=534, majf=0, minf=1 00:37:52.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:52.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:52.374 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:52.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:52.374 00:37:52.374 Run status group 0 (all jobs): 00:37:52.374 READ: bw=17.6MiB/s (18.5MB/s), 86.3KiB/s-9151KiB/s (88.3kB/s-9370kB/s), io=18.0MiB (18.8MB), run=1001-1020msec 00:37:52.374 WRITE: bw=23.5MiB/s (24.7MB/s), 2008KiB/s-9.99MiB/s (2056kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1020msec 00:37:52.374 00:37:52.374 Disk stats (read/write): 00:37:52.374 nvme0n1: ios=2072/2048, merge=0/0, ticks=514/311, in_queue=825, util=86.67% 00:37:52.374 nvme0n2: ios=2057/2048, merge=0/0, ticks=1321/302, in_queue=1623, util=93.40% 00:37:52.374 nvme0n3: ios=44/512, merge=0/0, ticks=1641/95, in_queue=1736, util=93.12% 00:37:52.374 nvme0n4: ios=74/512, merge=0/0, ticks=769/104, in_queue=873, util=95.06% 00:37:52.374 00:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:37:52.374 [global] 00:37:52.374 thread=1 00:37:52.374 invalidate=1 00:37:52.374 rw=write 00:37:52.374 time_based=1 00:37:52.374 runtime=1 00:37:52.374 ioengine=libaio 00:37:52.374 direct=1 00:37:52.374 bs=4096 00:37:52.374 iodepth=128 00:37:52.374 norandommap=0 00:37:52.374 numjobs=1 00:37:52.374 00:37:52.374 verify_dump=1 00:37:52.374 verify_backlog=512 00:37:52.374 verify_state_save=0 00:37:52.374 do_verify=1 00:37:52.374 verify=crc32c-intel 00:37:52.375 [job0] 00:37:52.375 filename=/dev/nvme0n1 00:37:52.375 [job1] 00:37:52.375 filename=/dev/nvme0n2 00:37:52.375 [job2] 00:37:52.375 filename=/dev/nvme0n3 00:37:52.375 [job3] 00:37:52.375 filename=/dev/nvme0n4 00:37:52.375 Could not set queue depth (nvme0n1) 00:37:52.375 Could not set queue depth (nvme0n2) 00:37:52.375 Could not set queue depth (nvme0n3) 00:37:52.375 Could not set queue depth (nvme0n4) 00:37:52.375 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:52.375 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:52.375 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:52.375 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:52.375 fio-3.35 00:37:52.375 Starting 4 threads 00:37:53.757 00:37:53.757 job0: (groupid=0, jobs=1): err= 0: pid=600812: Tue Dec 10 00:19:28 2024 00:37:53.757 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:37:53.757 slat (nsec): min=1311, max=15441k, avg=122478.64, stdev=971879.31 00:37:53.757 clat (usec): min=4533, max=41399, avg=17253.42, stdev=5500.39 00:37:53.757 lat (usec): min=4542, max=41424, avg=17375.90, stdev=5570.30 00:37:53.757 clat percentiles (usec): 00:37:53.757 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[11338], 20.00th=[12125], 00:37:53.757 | 
30.00th=[13698], 40.00th=[14615], 50.00th=[16188], 60.00th=[17695], 00:37:53.757 | 70.00th=[19268], 80.00th=[22676], 90.00th=[24511], 95.00th=[27657], 00:37:53.757 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35390], 99.95th=[40109], 00:37:53.757 | 99.99th=[41157] 00:37:53.757 write: IOPS=3823, BW=14.9MiB/s (15.7MB/s)(15.1MiB/1010msec); 0 zone resets 00:37:53.757 slat (usec): min=2, max=13787, avg=126.67, stdev=790.86 00:37:53.757 clat (usec): min=1451, max=56204, avg=17151.85, stdev=11144.43 00:37:53.757 lat (usec): min=1480, max=56214, avg=17278.52, stdev=11220.23 00:37:53.757 clat percentiles (usec): 00:37:53.757 | 1.00th=[ 4621], 5.00th=[ 5407], 10.00th=[ 5735], 20.00th=[ 9241], 00:37:53.757 | 30.00th=[10421], 40.00th=[11994], 50.00th=[13960], 60.00th=[16712], 00:37:53.757 | 70.00th=[21103], 80.00th=[21890], 90.00th=[33424], 95.00th=[45351], 00:37:53.757 | 99.00th=[52691], 99.50th=[53740], 99.90th=[54789], 99.95th=[56361], 00:37:53.757 | 99.99th=[56361] 00:37:53.757 bw ( KiB/s): min=13168, max=16704, per=21.47%, avg=14936.00, stdev=2500.33, samples=2 00:37:53.757 iops : min= 3292, max= 4176, avg=3734.00, stdev=625.08, samples=2 00:37:53.757 lat (msec) : 2=0.04%, 4=0.21%, 10=15.19%, 20=54.50%, 50=28.81% 00:37:53.757 lat (msec) : 100=1.25% 00:37:53.757 cpu : usr=3.67%, sys=4.66%, ctx=357, majf=0, minf=1 00:37:53.757 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:37:53.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:53.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:53.758 issued rwts: total=3584,3862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:53.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:53.758 job1: (groupid=0, jobs=1): err= 0: pid=600827: Tue Dec 10 00:19:28 2024 00:37:53.758 read: IOPS=5556, BW=21.7MiB/s (22.8MB/s)(22.6MiB/1043msec) 00:37:53.758 slat (nsec): min=1306, max=5324.7k, avg=82157.45, stdev=480307.74 00:37:53.758 clat (usec): min=6103, max=53402, avg=11477.12, stdev=5868.80 00:37:53.758 lat (usec): min=6109, max=56832, avg=11559.27, stdev=5878.39 00:37:53.758 clat percentiles (usec): 00:37:53.758 | 1.00th=[ 7111], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 9241], 00:37:53.758 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10552], 60.00th=[11076], 00:37:53.758 | 70.00th=[11600], 80.00th=[11994], 90.00th=[13173], 95.00th=[14222], 00:37:53.758 | 99.00th=[50070], 99.50th=[50594], 99.90th=[53216], 99.95th=[53216], 00:37:53.758 | 99.99th=[53216] 00:37:53.758 write: IOPS=5890, BW=23.0MiB/s (24.1MB/s)(24.0MiB/1043msec); 0 zone resets 00:37:53.758 slat (usec): min=2, max=20645, avg=79.48, stdev=512.51 00:37:53.758 clat (usec): min=4672, max=16094, avg=10229.58, stdev=1178.74 00:37:53.758 lat (usec): min=4678, max=26141, avg=10309.06, stdev=1244.11 00:37:53.758 clat percentiles (usec): 00:37:53.758 | 1.00th=[ 6718], 5.00th=[ 8356], 10.00th=[ 9372], 20.00th=[ 9765], 00:37:53.758 | 30.00th=[10028], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:37:53.758 | 70.00th=[10421], 80.00th=[10552], 90.00th=[11207], 95.00th=[12518], 00:37:53.758 | 99.00th=[14222], 99.50th=[14484], 99.90th=[15533], 99.95th=[15533], 00:37:53.758 | 99.99th=[16057] 00:37:53.758 bw ( KiB/s): min=24576, max=24576, per=35.33%, avg=24576.00, stdev= 0.00, samples=2 00:37:53.758 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:37:53.758 lat (msec) : 10=34.83%, 20=64.11%, 50=0.54%, 100=0.53% 00:37:53.758 cpu : usr=4.70%, sys=7.10%, ctx=540, majf=0, minf=1 00:37:53.758 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:37:53.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:53.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:53.758 issued rwts: total=5795,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:53.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:53.758 job2: (groupid=0, jobs=1): err= 0: pid=600840: Tue Dec 10 00:19:28 2024 00:37:53.758 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:37:53.758 slat (nsec): min=1391, max=71573k, avg=163239.66, stdev=1978765.73 00:37:53.758 clat (msec): min=4, max=111, avg=20.79, stdev=19.46 00:37:53.758 lat (msec): min=4, max=111, avg=20.96, stdev=19.60 00:37:53.758 clat percentiles (msec): 00:37:53.758 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:37:53.758 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 14], 00:37:53.758 | 70.00th=[ 20], 80.00th=[ 29], 90.00th=[ 39], 95.00th=[ 86], 00:37:53.758 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 100], 99.95th=[ 102], 00:37:53.758 | 99.99th=[ 111] 00:37:53.758 write: IOPS=3965, BW=15.5MiB/s (16.2MB/s)(15.6MiB/1006msec); 0 zone resets 00:37:53.758 slat (usec): min=2, max=28731, avg=88.05, stdev=839.96 00:37:53.758 clat (usec): min=433, max=61259, avg=12658.07, stdev=8678.95 00:37:53.758 lat (usec): min=445, max=67174, avg=12746.13, stdev=8709.20 00:37:53.758 clat percentiles (usec): 00:37:53.758 | 1.00th=[ 2540], 5.00th=[ 6456], 10.00th=[ 7242], 20.00th=[ 8356], 00:37:53.758 | 30.00th=[ 9765], 40.00th=[10814], 50.00th=[11207], 60.00th=[11731], 00:37:53.758 | 70.00th=[12256], 80.00th=[13960], 90.00th=[16188], 95.00th=[26084], 00:37:53.758 | 99.00th=[61080], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:37:53.758 | 99.99th=[61080] 00:37:53.758 bw ( KiB/s): min=10416, max=20480, per=22.21%, avg=15448.00, stdev=7116.32, samples=2 00:37:53.758 iops : min= 2604, max= 5120, avg=3862.00, stdev=1779.08, samples=2 00:37:53.758 lat (usec) : 500=0.04%, 750=0.01% 00:37:53.758 lat (msec) : 2=0.34%, 4=0.90%, 10=16.69%, 20=65.18%, 50=12.28% 00:37:53.758 lat (msec) : 100=4.53%, 250=0.03% 00:37:53.759 cpu : usr=2.69%, sys=5.37%, ctx=275, majf=0, minf=1 00:37:53.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:37:53.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:53.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:53.759 issued rwts: total=3584,3989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:53.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:53.759 job3: (groupid=0, jobs=1): err= 0: pid=600843: Tue Dec 10 00:19:28 2024 00:37:53.759 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:37:53.759 slat (nsec): min=1093, max=15264k, avg=115317.78, stdev=882944.57 00:37:53.759 clat (usec): min=2759, max=68784, avg=15634.20, stdev=10398.49 00:37:53.759 lat (usec): min=2765, max=68791, avg=15749.51, stdev=10455.07 00:37:53.759 clat percentiles (usec): 00:37:53.759 | 1.00th=[ 5473], 5.00th=[ 6718], 10.00th=[ 7504], 20.00th=[10290], 00:37:53.759 | 30.00th=[10945], 40.00th=[12125], 50.00th=[12780], 60.00th=[13042], 00:37:53.759 | 70.00th=[15139], 80.00th=[17957], 90.00th=[26346], 95.00th=[37487], 00:37:53.759 | 99.00th=[64750], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:37:53.759 | 99.99th=[68682] 00:37:53.759 write: IOPS=4127, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1004msec); 0 zone resets 00:37:53.759 slat (usec): min=2, 
max=35271, avg=83.18, stdev=830.52 00:37:53.759 clat (usec): min=730, max=72745, avg=14794.21, stdev=9663.04 00:37:53.759 lat (usec): min=756, max=72753, avg=14877.39, stdev=9715.87 00:37:53.759 clat percentiles (usec): 00:37:53.759 | 1.00th=[ 2999], 5.00th=[ 5473], 10.00th=[ 6325], 20.00th=[ 7373], 00:37:53.759 | 30.00th=[ 8160], 40.00th=[ 9634], 50.00th=[11076], 60.00th=[15270], 00:37:53.759 | 70.00th=[19530], 80.00th=[21627], 90.00th=[23200], 95.00th=[36439], 00:37:53.759 | 99.00th=[52167], 99.50th=[58983], 99.90th=[68682], 99.95th=[68682], 00:37:53.759 | 99.99th=[72877] 00:37:53.759 bw ( KiB/s): min=12504, max=20264, per=23.55%, avg=16384.00, stdev=5487.15, samples=2 00:37:53.759 iops : min= 3126, max= 5066, avg=4096.00, stdev=1371.79, samples=2 00:37:53.759 lat (usec) : 750=0.01%, 1000=0.01% 00:37:53.759 lat (msec) : 2=0.08%, 4=1.23%, 10=31.47%, 20=44.37%, 50=21.15% 00:37:53.759 lat (msec) : 100=1.67% 00:37:53.759 cpu : usr=2.79%, sys=4.89%, ctx=405, majf=0, minf=1 00:37:53.759 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:37:53.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:53.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:53.759 issued rwts: total=4096,4144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:53.759 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:53.759 00:37:53.759 Run status group 0 (all jobs): 00:37:53.759 READ: bw=63.9MiB/s (67.0MB/s), 13.9MiB/s-21.7MiB/s (14.5MB/s-22.8MB/s), io=66.6MiB (69.9MB), run=1004-1043msec 00:37:53.759 WRITE: bw=67.9MiB/s (71.2MB/s), 14.9MiB/s-23.0MiB/s (15.7MB/s-24.1MB/s), io=70.9MiB (74.3MB), run=1004-1043msec 00:37:53.759 00:37:53.759 Disk stats (read/write): 00:37:53.759 nvme0n1: ios=3094/3198, merge=0/0, ticks=52237/52113, in_queue=104350, util=99.90% 00:37:53.759 nvme0n2: ios=5052/5120, merge=0/0, ticks=26955/24361, in_queue=51316, util=92.89% 00:37:53.759 nvme0n3: ios=3131/3247, merge=0/0, ticks=53188/35268, in_queue=88456, util=95.62% 00:37:53.759 nvme0n4: ios=3244/3584, merge=0/0, ticks=47033/51714, in_queue=98747, util=97.48% 00:37:53.759 00:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:37:53.760 [global] 00:37:53.760 thread=1 00:37:53.760 invalidate=1 00:37:53.760 rw=randwrite 00:37:53.760 time_based=1 00:37:53.760 runtime=1 00:37:53.760 ioengine=libaio 00:37:53.760 direct=1 00:37:53.760 bs=4096 00:37:53.760 iodepth=128 00:37:53.760 norandommap=0 00:37:53.760 numjobs=1 00:37:53.760 00:37:53.760 verify_dump=1 00:37:53.760 verify_backlog=512 00:37:53.760 verify_state_save=0 00:37:53.760 do_verify=1 00:37:53.760 verify=crc32c-intel 00:37:53.760 [job0] 00:37:53.760 filename=/dev/nvme0n1 00:37:53.760 [job1] 00:37:53.760 filename=/dev/nvme0n2 00:37:53.760 [job2] 00:37:53.760 filename=/dev/nvme0n3 00:37:53.760 [job3] 00:37:53.760 filename=/dev/nvme0n4 00:37:53.760 Could not set queue depth (nvme0n1) 00:37:53.760 Could not set queue depth (nvme0n2) 00:37:53.760 Could not set queue depth (nvme0n3) 00:37:53.760 Could not set queue depth (nvme0n4) 00:37:54.024 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:54.024 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:54.024 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:37:54.024 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:37:54.024 fio-3.35 00:37:54.024 Starting 4 threads 00:37:55.404 00:37:55.404 job0: (groupid=0, jobs=1): err= 0: pid=601243: Tue Dec 10 00:19:30 2024 00:37:55.404 read: IOPS=3396, BW=13.3MiB/s (13.9MB/s)(13.9MiB/1051msec) 00:37:55.404 slat (nsec): min=1063, max=27548k, avg=171184.80, stdev=1333279.50 00:37:55.404 clat (msec): min=3, max=101, avg=20.84, stdev=15.89 00:37:55.404 lat (msec): min=3, max=101, avg=21.01, stdev=16.01 00:37:55.404 clat percentiles (msec): 00:37:55.404 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 12], 00:37:55.404 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 20], 00:37:55.404 | 70.00th=[ 25], 80.00th=[ 28], 90.00th=[ 35], 95.00th=[ 59], 00:37:55.404 | 99.00th=[ 88], 99.50th=[ 100], 99.90th=[ 102], 99.95th=[ 102], 00:37:55.404 | 99.99th=[ 102] 00:37:55.404 write: IOPS=3410, BW=13.3MiB/s (14.0MB/s)(14.0MiB/1051msec); 0 zone resets 00:37:55.404 slat (nsec): min=1761, max=15703k, avg=105083.39, stdev=699332.38 00:37:55.404 clat (usec): min=1516, max=101485, avg=16450.23, stdev=10787.17 00:37:55.404 lat (usec): min=1526, max=101488, avg=16555.32, stdev=10824.55 00:37:55.404 clat percentiles (msec): 00:37:55.404 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:37:55.404 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 16], 00:37:55.404 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 23], 95.00th=[ 32], 00:37:55.404 | 99.00th=[ 64], 99.50th=[ 72], 99.90th=[ 80], 99.95th=[ 102], 00:37:55.404 | 99.99th=[ 102] 00:37:55.404 bw ( KiB/s): min=12240, max=16432, per=20.89%, avg=14336.00, stdev=2964.19, samples=2 00:37:55.404 iops : min= 3060, max= 4108, avg=3584.00, stdev=741.05, samples=2 00:37:55.404 lat (msec) : 2=0.04%, 4=0.11%, 10=14.24%, 20=53.06%, 50=27.26% 00:37:55.404 lat (msec) : 100=5.07%, 250=0.21% 00:37:55.404 cpu : usr=2.19%, sys=3.90%, ctx=291, majf=0, minf=2 00:37:55.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:37:55.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:55.404 issued rwts: total=3570,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.404 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:55.404 job1: (groupid=0, jobs=1): err= 0: pid=601256: Tue Dec 10 00:19:30 2024 00:37:55.404 read: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1009msec) 00:37:55.404 slat (nsec): min=1248, max=9416.0k, avg=84991.56, stdev=709165.63 00:37:55.404 clat (usec): min=3343, max=19424, avg=10596.11, stdev=2514.25 00:37:55.404 lat (usec): min=3350, max=25588, avg=10681.10, stdev=2593.66 00:37:55.404 clat percentiles (usec): 00:37:55.404 | 1.00th=[ 6063], 5.00th=[ 7963], 10.00th=[ 8848], 20.00th=[ 9372], 00:37:55.404 | 30.00th=[ 9503], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[ 9896], 00:37:55.404 | 70.00th=[10290], 80.00th=[11338], 90.00th=[15008], 95.00th=[16712], 00:37:55.404 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:37:55.404 | 99.99th=[19530] 00:37:55.404 write: IOPS=6459, BW=25.2MiB/s (26.5MB/s)(25.5MiB/1009msec); 0 zone resets 00:37:55.404 slat (usec): min=2, max=8552, avg=67.44, stdev=453.96 00:37:55.404 clat (usec): min=1999, max=19644, avg=9453.03, stdev=2324.59 00:37:55.404 lat (usec): min=2009, max=19652, avg=9520.47, stdev=2347.97 00:37:55.404 clat percentiles (usec): 
00:37:55.404 | 1.00th=[ 3654], 5.00th=[ 5997], 10.00th=[ 6456], 20.00th=[ 7635], 00:37:55.404 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10028], 00:37:55.404 | 70.00th=[10290], 80.00th=[10421], 90.00th=[11994], 95.00th=[13304], 00:37:55.404 | 99.00th=[17171], 99.50th=[18744], 99.90th=[19530], 99.95th=[19530], 00:37:55.404 | 99.99th=[19530] 00:37:55.404 bw ( KiB/s): min=24816, max=26312, per=37.25%, avg=25564.00, stdev=1057.83, samples=2 00:37:55.404 iops : min= 6204, max= 6578, avg=6391.00, stdev=264.46, samples=2 00:37:55.404 lat (msec) : 2=0.01%, 4=0.87%, 10=60.40%, 20=38.72% 00:37:55.404 cpu : usr=5.36%, sys=6.65%, ctx=516, majf=0, minf=1 00:37:55.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:37:55.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:55.405 issued rwts: total=6144,6518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:55.405 job2: (groupid=0, jobs=1): err= 0: pid=601273: Tue Dec 10 00:19:30 2024 00:37:55.405 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:37:55.405 slat (nsec): min=1580, max=9517.6k, avg=101044.72, stdev=567152.77 00:37:55.405 clat (usec): min=8476, max=35962, avg=13108.51, stdev=3903.42 00:37:55.405 lat (usec): min=8479, max=35974, avg=13209.56, stdev=3944.26 00:37:55.405 clat percentiles (usec): 00:37:55.405 | 1.00th=[ 9241], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:37:55.405 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[12518], 00:37:55.405 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14353], 95.00th=[23462], 00:37:55.405 | 99.00th=[29754], 99.50th=[31851], 99.90th=[33817], 99.95th=[33817], 00:37:55.405 | 99.99th=[35914] 00:37:55.405 write: IOPS=4846, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1002msec); 0 zone resets 00:37:55.405 slat (usec): min=2, max=11065, avg=103.51, stdev=612.52 00:37:55.405 clat (usec): min=498, max=39459, avg=13531.92, stdev=5188.39 00:37:55.405 lat (usec): min=3886, max=39493, avg=13635.43, stdev=5243.66 00:37:55.405 clat percentiles (usec): 00:37:55.405 | 1.00th=[ 8094], 5.00th=[10421], 10.00th=[11338], 20.00th=[11600], 00:37:55.405 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:37:55.405 | 70.00th=[12125], 80.00th=[12518], 90.00th=[25822], 95.00th=[28181], 00:37:55.405 | 99.00th=[30802], 99.50th=[30802], 99.90th=[36963], 99.95th=[38011], 00:37:55.405 | 99.99th=[39584] 00:37:55.405 bw ( KiB/s): min=16400, max=21432, per=27.57%, avg=18916.00, stdev=3558.16, samples=2 00:37:55.405 iops : min= 4100, max= 5358, avg=4729.00, stdev=889.54, samples=2 00:37:55.405 lat (usec) : 500=0.01% 00:37:55.405 lat (msec) : 4=0.08%, 10=3.30%, 20=87.81%, 50=8.80% 00:37:55.405 cpu : usr=4.30%, sys=6.19%, ctx=420, majf=0, minf=1 00:37:55.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:37:55.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:55.405 issued rwts: total=4608,4856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:55.405 job3: (groupid=0, jobs=1): err= 0: pid=601278: Tue Dec 10 00:19:30 2024 00:37:55.405 read: IOPS=2865, BW=11.2MiB/s (11.7MB/s)(11.3MiB/1011msec) 00:37:55.405 slat (nsec): min=1434, max=15669k, avg=150803.92, stdev=1127199.98 
00:37:55.405 clat (usec): min=5284, max=36960, avg=19160.81, stdev=5672.57 00:37:55.405 lat (usec): min=5290, max=37947, avg=19311.61, stdev=5764.82 00:37:55.405 clat percentiles (usec): 00:37:55.405 | 1.00th=[ 9372], 5.00th=[12125], 10.00th=[12649], 20.00th=[13173], 00:37:55.405 | 30.00th=[14353], 40.00th=[17957], 50.00th=[19006], 60.00th=[20055], 00:37:55.405 | 70.00th=[22152], 80.00th=[22938], 90.00th=[27132], 95.00th=[29230], 00:37:55.405 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:37:55.405 | 99.99th=[36963] 00:37:55.405 write: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec); 0 zone resets 00:37:55.405 slat (usec): min=2, max=20463, avg=170.83, stdev=1143.75 00:37:55.405 clat (usec): min=3779, max=87675, avg=23375.95, stdev=13320.03 00:37:55.405 lat (usec): min=3789, max=87687, avg=23546.78, stdev=13399.69 00:37:55.405 clat percentiles (usec): 00:37:55.405 | 1.00th=[ 7570], 5.00th=[ 9896], 10.00th=[11600], 20.00th=[17695], 00:37:55.405 | 30.00th=[18482], 40.00th=[19006], 50.00th=[21103], 60.00th=[22152], 00:37:55.405 | 70.00th=[22414], 80.00th=[23725], 90.00th=[34341], 95.00th=[55313], 00:37:55.405 | 99.00th=[81265], 99.50th=[84411], 99.90th=[87557], 99.95th=[87557], 00:37:55.405 | 99.99th=[87557] 00:37:55.405 bw ( KiB/s): min=12288, max=12288, per=17.91%, avg=12288.00, stdev= 0.00, samples=2 00:37:55.405 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:37:55.405 lat (msec) : 4=0.20%, 10=3.35%, 20=48.27%, 50=44.76%, 100=3.42% 00:37:55.405 cpu : usr=2.38%, sys=4.26%, ctx=227, majf=0, minf=1 00:37:55.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:37:55.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:55.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:55.405 issued rwts: total=2897,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:55.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:55.405 00:37:55.405 Run status group 0 (all jobs): 00:37:55.405 READ: bw=64.0MiB/s (67.1MB/s), 11.2MiB/s-23.8MiB/s (11.7MB/s-24.9MB/s), io=67.3MiB (70.5MB), run=1002-1051msec 00:37:55.405 WRITE: bw=67.0MiB/s (70.3MB/s), 11.9MiB/s-25.2MiB/s (12.4MB/s-26.5MB/s), io=70.4MiB (73.8MB), run=1002-1051msec 00:37:55.405 00:37:55.405 Disk stats (read/write): 00:37:55.405 nvme0n1: ios=3122/3503, merge=0/0, ticks=43336/45894, in_queue=89230, util=87.17% 00:37:55.405 nvme0n2: ios=5170/5551, merge=0/0, ticks=52004/48890, in_queue=100894, util=90.76% 00:37:55.405 nvme0n3: ios=3841/4096, merge=0/0, ticks=17068/17378, in_queue=34446, util=97.50% 00:37:55.405 nvme0n4: ios=2205/2560, merge=0/0, ticks=38329/56274, in_queue=94603, util=97.27% 00:37:55.405 00:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:37:55.405 00:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=601392 00:37:55.405 00:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:37:55.405 00:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:37:55.405 [global] 00:37:55.405 thread=1 00:37:55.405 invalidate=1 00:37:55.405 rw=read 00:37:55.405 time_based=1 00:37:55.405 runtime=10 00:37:55.405 ioengine=libaio 00:37:55.405 direct=1 00:37:55.405 bs=4096 00:37:55.405 iodepth=1 00:37:55.405 norandommap=1 00:37:55.405 numjobs=1 
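The [global] options printed above describe the 10-second sequential-read pass that fio-wrapper drives against the four NVMe namespaces; the [job0]-[job3] sections that follow below bind one job to each device. A minimal hand-written job file approximating the same workload is sketched here, with device names taken from this log (the real file is generated by scripts/fio-wrapper from the "-p nvmf -i 4096 -d 1 -t read -r 10" arguments shown earlier, so paths and defaults may differ on another system):

#!/usr/bin/env bash
# Sketch only: hand-rolled equivalent of the fio-wrapper read workload shown above.
# The /dev/nvme0nX names come from this log and are not guaranteed elsewhere.
cat > /tmp/nvmf-read.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
sudo fio /tmp/nvmf-read.fio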
00:37:55.405 00:37:55.405 [job0] 00:37:55.405 filename=/dev/nvme0n1 00:37:55.405 [job1] 00:37:55.405 filename=/dev/nvme0n2 00:37:55.405 [job2] 00:37:55.405 filename=/dev/nvme0n3 00:37:55.405 [job3] 00:37:55.405 filename=/dev/nvme0n4 00:37:55.405 Could not set queue depth (nvme0n1) 00:37:55.405 Could not set queue depth (nvme0n2) 00:37:55.405 Could not set queue depth (nvme0n3) 00:37:55.405 Could not set queue depth (nvme0n4) 00:37:55.664 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:55.664 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:55.664 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:55.664 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:37:55.664 fio-3.35 00:37:55.664 Starting 4 threads 00:37:58.198 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete concat0 00:37:58.456 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=380928, buflen=4096 00:37:58.456 fio: pid=601687, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:58.456 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete raid0 00:37:58.715 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=40153088, buflen=4096 00:37:58.715 fio: pid=601686, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:58.715 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:58.715 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:37:58.974 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=46817280, buflen=4096 00:37:58.974 fio: pid=601684, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:58.974 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:58.974 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:37:59.234 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=47833088, buflen=4096 00:37:59.234 fio: pid=601685, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:37:59.234 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:59.234 00:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:37:59.234 00:37:59.234 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=601684: Tue Dec 10 00:19:34 2024 00:37:59.234 read: IOPS=3620, BW=14.1MiB/s (14.8MB/s)(44.6MiB/3157msec) 
00:37:59.234 slat (usec): min=2, max=21604, avg=13.41, stdev=315.18 00:37:59.234 clat (usec): min=179, max=40820, avg=259.51, stdev=537.70 00:37:59.234 lat (usec): min=187, max=40828, avg=272.92, stdev=624.04 00:37:59.234 clat percentiles (usec): 00:37:59.234 | 1.00th=[ 190], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 229], 00:37:59.234 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:37:59.234 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 314], 00:37:59.234 | 99.00th=[ 453], 99.50th=[ 474], 99.90th=[ 515], 99.95th=[ 676], 00:37:59.234 | 99.99th=[40633] 00:37:59.234 bw ( KiB/s): min=13512, max=15608, per=37.18%, avg=14550.67, stdev=739.03, samples=6 00:37:59.234 iops : min= 3378, max= 3902, avg=3637.67, stdev=184.76, samples=6 00:37:59.234 lat (usec) : 250=56.98%, 500=42.74%, 750=0.24%, 1000=0.01% 00:37:59.234 lat (msec) : 50=0.02% 00:37:59.234 cpu : usr=0.95%, sys=3.64%, ctx=11437, majf=0, minf=2 00:37:59.234 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:59.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.234 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.234 issued rwts: total=11431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:59.234 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=601685: Tue Dec 10 00:19:34 2024 00:37:59.234 read: IOPS=3462, BW=13.5MiB/s (14.2MB/s)(45.6MiB/3373msec) 00:37:59.234 slat (usec): min=2, max=15669, avg=13.79, stdev=263.53 00:37:59.234 clat (usec): min=175, max=41454, avg=271.30, stdev=781.46 00:37:59.234 lat (usec): min=185, max=41463, avg=285.10, stdev=825.68 00:37:59.234 clat percentiles (usec): 00:37:59.234 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 215], 20.00th=[ 229], 00:37:59.234 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:37:59.234 | 70.00th=[ 258], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 318], 00:37:59.234 | 99.00th=[ 453], 99.50th=[ 461], 99.90th=[ 611], 99.95th=[ 8356], 00:37:59.234 | 99.99th=[41157] 00:37:59.234 bw ( KiB/s): min=11128, max=15624, per=34.71%, avg=13585.50, stdev=1709.74, samples=6 00:37:59.234 iops : min= 2782, max= 3906, avg=3396.33, stdev=427.45, samples=6 00:37:59.234 lat (usec) : 250=58.32%, 500=41.48%, 750=0.11% 00:37:59.234 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%, 50=0.04% 00:37:59.234 cpu : usr=1.99%, sys=5.60%, ctx=11683, majf=0, minf=1 00:37:59.234 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:59.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.234 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.234 issued rwts: total=11679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:59.234 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=601686: Tue Dec 10 00:19:34 2024 00:37:59.234 read: IOPS=3324, BW=13.0MiB/s (13.6MB/s)(38.3MiB/2949msec) 00:37:59.234 slat (usec): min=6, max=14896, avg=12.81, stdev=191.15 00:37:59.234 clat (usec): min=187, max=21068, avg=283.39, stdev=217.90 00:37:59.234 lat (usec): min=195, max=21080, avg=296.19, stdev=290.22 00:37:59.234 clat percentiles (usec): 00:37:59.234 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 245], 00:37:59.234 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 
60.00th=[ 277], 00:37:59.234 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 424], 00:37:59.234 | 99.00th=[ 506], 99.50th=[ 510], 99.90th=[ 523], 99.95th=[ 529], 00:37:59.234 | 99.99th=[21103] 00:37:59.234 bw ( KiB/s): min=12600, max=15016, per=34.48%, avg=13496.00, stdev=903.52, samples=5 00:37:59.234 iops : min= 3150, max= 3754, avg=3374.00, stdev=225.88, samples=5 00:37:59.234 lat (usec) : 250=28.33%, 500=69.62%, 750=2.01% 00:37:59.234 lat (msec) : 2=0.02%, 50=0.01% 00:37:59.234 cpu : usr=2.04%, sys=5.97%, ctx=9808, majf=0, minf=1 00:37:59.234 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:59.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.234 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.234 issued rwts: total=9804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:59.234 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=601687: Tue Dec 10 00:19:34 2024 00:37:59.234 read: IOPS=34, BW=136KiB/s (139kB/s)(372KiB/2738msec) 00:37:59.234 slat (nsec): min=3078, max=34481, avg=13312.78, stdev=6461.47 00:37:59.234 clat (usec): min=222, max=42033, avg=29200.14, stdev=18538.23 00:37:59.234 lat (usec): min=225, max=42046, avg=29213.39, stdev=18541.87 00:37:59.234 clat percentiles (usec): 00:37:59.234 | 1.00th=[ 223], 5.00th=[ 281], 10.00th=[ 322], 20.00th=[ 416], 00:37:59.234 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:59.234 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:59.234 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:59.234 | 99.99th=[42206] 00:37:59.234 bw ( KiB/s): min= 96, max= 296, per=0.36%, avg=139.20, stdev=87.93, samples=5 00:37:59.234 iops : min= 24, max= 74, avg=34.80, stdev=21.98, samples=5 00:37:59.234 lat (usec) : 250=2.13%, 500=23.40%, 750=3.19% 00:37:59.234 lat (msec) : 50=70.21% 00:37:59.234 cpu : usr=0.00%, sys=0.07%, ctx=95, majf=0, minf=1 00:37:59.234 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:59.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.234 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.234 issued rwts: total=94,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:59.234 00:37:59.234 Run status group 0 (all jobs): 00:37:59.234 READ: bw=38.2MiB/s (40.1MB/s), 136KiB/s-14.1MiB/s (139kB/s-14.8MB/s), io=129MiB (135MB), run=2738-3373msec 00:37:59.234 00:37:59.234 Disk stats (read/write): 00:37:59.234 nvme0n1: ios=11331/0, merge=0/0, ticks=3086/0, in_queue=3086, util=97.53% 00:37:59.234 nvme0n2: ios=11678/0, merge=0/0, ticks=3018/0, in_queue=3018, util=94.77% 00:37:59.234 nvme0n3: ios=9614/0, merge=0/0, ticks=2721/0, in_queue=2721, util=98.31% 00:37:59.234 nvme0n4: ios=90/0, merge=0/0, ticks=2594/0, in_queue=2594, util=96.44% 00:37:59.494 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:59.494 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:37:59.494 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:59.494 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:37:59.753 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:37:59.753 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:38:00.013 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:38:00.013 00:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:38:00.272 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:38:00.272 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 601392 00:38:00.272 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:38:00.272 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:00.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:00.272 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:00.272 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:38:00.272 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:38:00.272 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:00.272 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:38:00.272 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:00.272 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:38:00.272 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:38:00.272 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:38:00.272 nvmf hotplug test: fio failed as expected 00:38:00.272 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:00.531 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:38:00.531 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:38:00.531 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:38:00.531 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:38:00.531 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:38:00.531 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:00.531 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:38:00.531 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:00.531 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:38:00.531 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:00.531 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:00.531 rmmod nvme_tcp 00:38:00.531 rmmod nvme_fabrics 00:38:00.531 rmmod nvme_keyring 00:38:00.531 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 598850 ']' 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 598850 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 598850 ']' 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 598850 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 598850 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 598850' 00:38:00.791 killing process with pid 598850 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 598850 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 598850 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:00.791 00:19:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:00.791 00:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.341 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:03.341 00:38:03.341 real 0m26.701s 00:38:03.341 user 1m30.967s 00:38:03.341 sys 0m11.977s 00:38:03.341 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:03.341 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:38:03.341 ************************************ 00:38:03.341 END TEST nvmf_fio_target 00:38:03.341 ************************************ 00:38:03.341 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:38:03.341 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:03.341 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:03.341 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:03.341 ************************************ 00:38:03.341 START TEST nvmf_bdevio 00:38:03.341 ************************************ 00:38:03.341 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:38:03.341 * Looking for test storage... 
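For reference, the run_test wrapper invoked above ultimately executes the named script with the flags that follow it, adding the START/END TEST banners and the timing summary seen in this log. A rough standalone equivalent, assuming the workspace layout from this log and an environment already prepared by autotest (hugepages, NIC setup, autorun-spdk.conf), would be:

#!/usr/bin/env bash
# Sketch only: direct invocation of the bdevio test script named in the run_test line above.
# run_test's banner and timing bookkeeping is omitted here.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
sudo "$SPDK_DIR/test/nvmf/target/bdevio.sh" --transport=tcp --interrupt-mode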
00:38:03.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:38:03.341 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:03.341 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:38:03.341 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:03.341 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:03.341 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:38:03.342 00:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:03.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.342 --rc genhtml_branch_coverage=1 00:38:03.342 --rc genhtml_function_coverage=1 00:38:03.342 --rc genhtml_legend=1 00:38:03.342 --rc geninfo_all_blocks=1 00:38:03.342 --rc geninfo_unexecuted_blocks=1 00:38:03.342 00:38:03.342 ' 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:03.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.342 --rc genhtml_branch_coverage=1 00:38:03.342 --rc genhtml_function_coverage=1 00:38:03.342 --rc genhtml_legend=1 00:38:03.342 --rc geninfo_all_blocks=1 00:38:03.342 --rc geninfo_unexecuted_blocks=1 00:38:03.342 00:38:03.342 ' 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:03.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.342 --rc genhtml_branch_coverage=1 00:38:03.342 --rc genhtml_function_coverage=1 00:38:03.342 --rc genhtml_legend=1 00:38:03.342 --rc geninfo_all_blocks=1 00:38:03.342 --rc geninfo_unexecuted_blocks=1 00:38:03.342 00:38:03.342 ' 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:03.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:03.342 --rc genhtml_branch_coverage=1 00:38:03.342 --rc genhtml_function_coverage=1 00:38:03.342 --rc genhtml_legend=1 00:38:03.342 --rc geninfo_all_blocks=1 00:38:03.342 --rc geninfo_unexecuted_blocks=1 00:38:03.342 00:38:03.342 ' 00:38:03.342 00:19:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.342 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:03.343 00:19:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:38:03.343 00:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:09.920 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:09.920 00:19:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:09.920 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:09.920 Found net devices under 0000:86:00.0: cvl_0_0 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:09.920 Found net devices under 0000:86:00.1: cvl_0_1 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:09.920 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:09.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:09.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:38:09.921 00:38:09.921 --- 10.0.0.2 ping statistics --- 00:38:09.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.921 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:09.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:09.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:38:09.921 00:38:09.921 --- 10.0.0.1 ping statistics --- 00:38:09.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:09.921 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:09.921 00:19:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=605915 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 605915 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 605915 ']' 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:09.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:09.921 00:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:09.921 [2024-12-10 00:19:43.949249] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:09.921 [2024-12-10 00:19:43.950230] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:38:09.921 [2024-12-10 00:19:43.950275] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:09.921 [2024-12-10 00:19:44.032601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:09.921 [2024-12-10 00:19:44.074184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:09.921 [2024-12-10 00:19:44.074221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:09.921 [2024-12-10 00:19:44.074228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:09.921 [2024-12-10 00:19:44.074234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:09.921 [2024-12-10 00:19:44.074240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:09.921 [2024-12-10 00:19:44.075708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:09.921 [2024-12-10 00:19:44.075817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:09.921 [2024-12-10 00:19:44.075925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:09.921 [2024-12-10 00:19:44.075926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:09.921 [2024-12-10 00:19:44.144779] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
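For reference, the environment that nvmftestinit traced above boils down to roughly the following condensed sketch. The commands are taken from the trace itself; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing and the -m 0x78 core mask are specific to this run, and the relative ./build/bin path plus the backgrounding are simplifications:
  # one E810 port goes into a private namespace for the target, the other stays on the host for the initiator
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic on port 4420 (the real rule carries an SPDK_NVMF comment so teardown can find it)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify reachability in both directions, then start the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
The -m 0x78 mask matches the four reactors reported above on cores 3-6, and --interrupt-mode is what produces the 'Set spdk_thread ... to intr mode' notices.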
00:38:09.921 [2024-12-10 00:19:44.145079] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:09.921 [2024-12-10 00:19:44.145588] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:09.921 [2024-12-10 00:19:44.145731] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:09.921 [2024-12-10 00:19:44.145802] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:09.921 [2024-12-10 00:19:44.212681] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:09.921 Malloc0 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.921 00:19:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:09.921 [2024-12-10 00:19:44.292873] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:09.921 { 00:38:09.921 "params": { 00:38:09.921 "name": "Nvme$subsystem", 00:38:09.921 "trtype": "$TEST_TRANSPORT", 00:38:09.921 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:09.921 "adrfam": "ipv4", 00:38:09.921 "trsvcid": "$NVMF_PORT", 00:38:09.921 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:09.921 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:09.921 "hdgst": ${hdgst:-false}, 00:38:09.921 "ddgst": ${ddgst:-false} 00:38:09.921 }, 00:38:09.921 "method": "bdev_nvme_attach_controller" 00:38:09.921 } 00:38:09.921 EOF 00:38:09.921 )") 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:38:09.921 00:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:09.921 "params": { 00:38:09.921 "name": "Nvme1", 00:38:09.921 "trtype": "tcp", 00:38:09.921 "traddr": "10.0.0.2", 00:38:09.921 "adrfam": "ipv4", 00:38:09.921 "trsvcid": "4420", 00:38:09.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:09.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:09.921 "hdgst": false, 00:38:09.921 "ddgst": false 00:38:09.921 }, 00:38:09.921 "method": "bdev_nvme_attach_controller" 00:38:09.921 }' 00:38:09.921 [2024-12-10 00:19:44.343123] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
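The target configuration and the bdevio run traced above amount to the sketch below. rpc_cmd is the suite's wrapper around SPDK's JSON-RPC interface on /var/tmp/spdk.sock; spelling the same calls as scripts/rpc.py invocations, and using process substitution instead of the /dev/fd/62 redirection seen in the trace, are simplifications of this sketch rather than what the script literally executes:
  # export a 64 MiB malloc bdev with 512-byte blocks over NVMe/TCP at 10.0.0.2:4420
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevio attaches from the initiator side using the generated JSON printed above
  # (a single bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.2:4420, no digests)
  test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)
The 64/512 malloc parameters are what make bdevio report Nvme1n1 as 131072 blocks of 512 bytes (64 MiB) further down.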
00:38:09.922 [2024-12-10 00:19:44.343182] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605945 ] 00:38:09.922 [2024-12-10 00:19:44.417701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:09.922 [2024-12-10 00:19:44.460832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:09.922 [2024-12-10 00:19:44.460938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:09.922 [2024-12-10 00:19:44.460939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:09.922 I/O targets: 00:38:09.922 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:38:09.922 00:38:09.922 00:38:09.922 CUnit - A unit testing framework for C - Version 2.1-3 00:38:09.922 http://cunit.sourceforge.net/ 00:38:09.922 00:38:09.922 00:38:09.922 Suite: bdevio tests on: Nvme1n1 00:38:09.922 Test: blockdev write read block ...passed 00:38:09.922 Test: blockdev write zeroes read block ...passed 00:38:09.922 Test: blockdev write zeroes read no split ...passed 00:38:09.922 Test: blockdev write zeroes read split ...passed 00:38:09.922 Test: blockdev write zeroes read split partial ...passed 00:38:09.922 Test: blockdev reset ...[2024-12-10 00:19:44.761443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:38:09.922 [2024-12-10 00:19:44.761534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20aaf30 (9): Bad file descriptor 00:38:09.922 [2024-12-10 00:19:44.806013] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:38:09.922 passed 00:38:09.922 Test: blockdev write read 8 blocks ...passed 00:38:10.181 Test: blockdev write read size > 128k ...passed 00:38:10.181 Test: blockdev write read invalid size ...passed 00:38:10.181 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:10.181 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:10.181 Test: blockdev write read max offset ...passed 00:38:10.181 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:10.181 Test: blockdev writev readv 8 blocks ...passed 00:38:10.181 Test: blockdev writev readv 30 x 1block ...passed 00:38:10.181 Test: blockdev writev readv block ...passed 00:38:10.181 Test: blockdev writev readv size > 128k ...passed 00:38:10.181 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:10.181 Test: blockdev comparev and writev ...[2024-12-10 00:19:45.056919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:10.181 [2024-12-10 00:19:45.056948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:38:10.181 [2024-12-10 00:19:45.056962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:10.181 [2024-12-10 00:19:45.056971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:38:10.181 [2024-12-10 00:19:45.057268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:10.181 [2024-12-10 00:19:45.057279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:38:10.181 [2024-12-10 00:19:45.057296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:10.181 [2024-12-10 00:19:45.057304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:38:10.181 [2024-12-10 00:19:45.057591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:10.181 [2024-12-10 00:19:45.057601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:38:10.181 [2024-12-10 00:19:45.057613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:10.181 [2024-12-10 00:19:45.057621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:38:10.181 [2024-12-10 00:19:45.057902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:10.181 [2024-12-10 00:19:45.057913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:38:10.181 [2024-12-10 00:19:45.057925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:38:10.181 [2024-12-10 00:19:45.057933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:38:10.181 passed 00:38:10.439 Test: blockdev nvme passthru rw ...passed 00:38:10.439 Test: blockdev nvme passthru vendor specific ...[2024-12-10 00:19:45.139529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:10.439 [2024-12-10 00:19:45.139547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:38:10.440 [2024-12-10 00:19:45.139661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:10.440 [2024-12-10 00:19:45.139671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:38:10.440 [2024-12-10 00:19:45.139785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:10.440 [2024-12-10 00:19:45.139794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:38:10.440 [2024-12-10 00:19:45.139903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:38:10.440 [2024-12-10 00:19:45.139912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:38:10.440 passed 00:38:10.440 Test: blockdev nvme admin passthru ...passed 00:38:10.440 Test: blockdev copy ...passed 00:38:10.440 00:38:10.440 Run Summary: Type Total Ran Passed Failed Inactive 00:38:10.440 suites 1 1 n/a 0 0 00:38:10.440 tests 23 23 23 0 0 00:38:10.440 asserts 152 152 152 0 n/a 00:38:10.440 00:38:10.440 Elapsed time = 1.088 seconds 00:38:10.440 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:10.440 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.440 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:10.440 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.440 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:38:10.440 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:38:10.440 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:10.440 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:38:10.440 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:10.440 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:38:10.440 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:10.440 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:10.440 rmmod nvme_tcp 00:38:10.440 rmmod nvme_fabrics 00:38:10.699 rmmod nvme_keyring 00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
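The teardown traced here and on the following lines (nvmftestfini) mirrors that setup. A condensed sketch, with the caveat that _remove_spdk_ns runs with xtrace disabled, so the ip netns delete line below is an assumption, and the PID and interface names are specific to this run:
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp           # the rmmod lines above show this also pulls out nvme-fabrics and nvme-keyring
  modprobe -v -r nvme-fabrics
  kill 605915 && wait 605915        # killprocess: stop the nvmf_tgt started earlier
  # drop only the firewall rules the suite added, i.e. those tagged with the SPDK_NVMF comment
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1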
00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 605915 ']' 00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 605915 00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 605915 ']' 00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 605915 00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 605915 00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 605915' 00:38:10.699 killing process with pid 605915 00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 605915 00:38:10.699 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 605915 00:38:10.959 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:10.959 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:10.959 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:10.959 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:38:10.959 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:38:10.959 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:10.959 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:38:10.959 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:10.959 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:10.959 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:10.959 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:10.959 00:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:12.879 00:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:12.879 00:38:12.879 real 0m9.904s 00:38:12.879 user 0m8.526s 
00:38:12.879 sys 0m5.281s 00:38:12.879 00:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:12.879 00:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:38:12.879 ************************************ 00:38:12.879 END TEST nvmf_bdevio 00:38:12.879 ************************************ 00:38:12.879 00:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:38:12.879 00:38:12.879 real 4m33.371s 00:38:12.879 user 9m7.346s 00:38:12.879 sys 1m52.799s 00:38:12.879 00:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:12.879 00:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:12.879 ************************************ 00:38:12.879 END TEST nvmf_target_core_interrupt_mode 00:38:12.879 ************************************ 00:38:12.879 00:19:47 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:38:12.879 00:19:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:12.879 00:19:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:12.879 00:19:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:13.138 ************************************ 00:38:13.138 START TEST nvmf_interrupt 00:38:13.138 ************************************ 00:38:13.138 00:19:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:38:13.138 * Looking for test storage... 
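At this point the bdevio suite is done and the harness moves on to the interrupt-mode target test. Each suite is driven through the run_test helper which, as the START/END banners and the real/user/sys summary above suggest, wraps the script in timing and accounting. Re-running just this suite by hand against an SPDK checkout would look roughly like the following, typically as root since it manipulates namespaces, firewall rules and kernel modules:
  ./test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
The --interrupt-mode flag is what makes nvmf/common.sh append --interrupt-mode to NVMF_APP, as seen in this suite's trace below.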
00:38:13.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:38:13.138 00:19:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:13.138 00:19:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:38:13.138 00:19:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:13.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.138 --rc genhtml_branch_coverage=1 00:38:13.138 --rc genhtml_function_coverage=1 00:38:13.138 --rc genhtml_legend=1 00:38:13.138 --rc geninfo_all_blocks=1 00:38:13.138 --rc geninfo_unexecuted_blocks=1 00:38:13.138 00:38:13.138 ' 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:13.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.138 --rc genhtml_branch_coverage=1 00:38:13.138 --rc genhtml_function_coverage=1 00:38:13.138 --rc genhtml_legend=1 00:38:13.138 --rc geninfo_all_blocks=1 00:38:13.138 --rc geninfo_unexecuted_blocks=1 00:38:13.138 00:38:13.138 ' 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:13.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.138 --rc genhtml_branch_coverage=1 00:38:13.138 --rc genhtml_function_coverage=1 00:38:13.138 --rc genhtml_legend=1 00:38:13.138 --rc geninfo_all_blocks=1 00:38:13.138 --rc geninfo_unexecuted_blocks=1 00:38:13.138 00:38:13.138 ' 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:13.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:13.138 --rc genhtml_branch_coverage=1 00:38:13.138 --rc genhtml_function_coverage=1 00:38:13.138 --rc genhtml_legend=1 00:38:13.138 --rc geninfo_all_blocks=1 00:38:13.138 --rc geninfo_unexecuted_blocks=1 00:38:13.138 00:38:13.138 ' 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/interrupt/common.sh 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:13.138 00:19:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:38:13.397 00:19:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:38:19.972 Found 0000:86:00.0 (0x8086 - 0x159b) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:19.972 00:19:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:38:19.972 Found 0000:86:00.1 (0x8086 - 0x159b) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:19.972 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:38:19.973 Found net devices under 0000:86:00.0: cvl_0_0 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:38:19.973 Found net devices under 0000:86:00.1: cvl_0_1 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:19.973 00:19:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:19.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:19.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:38:19.973 00:38:19.973 --- 10.0.0.2 ping statistics --- 00:38:19.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:19.973 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:19.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:19.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:38:19.973 00:38:19.973 --- 10.0.0.1 ping statistics --- 00:38:19.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:19.973 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=609669 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 609669 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 609669 ']' 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:19.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:19.973 00:19:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:19.973 [2024-12-10 00:19:53.974841] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:19.973 [2024-12-10 00:19:53.975791] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:38:19.974 [2024-12-10 00:19:53.975825] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:19.974 [2024-12-10 00:19:54.056074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:19.974 [2024-12-10 00:19:54.096118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:38:19.974 [2024-12-10 00:19:54.096150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:19.974 [2024-12-10 00:19:54.096164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:19.974 [2024-12-10 00:19:54.096171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:19.974 [2024-12-10 00:19:54.096176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:19.974 [2024-12-10 00:19:54.097342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:19.974 [2024-12-10 00:19:54.097345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.974 [2024-12-10 00:19:54.164337] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:19.974 [2024-12-10 00:19:54.164838] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:19.974 [2024-12-10 00:19:54.165090] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:38:19.974 5000+0 records in 00:38:19.974 5000+0 records out 00:38:19.974 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0166134 s, 616 MB/s 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aiofile AIO0 2048 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:19.974 AIO0 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:19.974 [2024-12-10 00:19:54.290134] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.974 00:19:54 
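With the namespace in place, nvmfappstart launches the target inside it in interrupt mode (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3), which is why the reactor and spdk_thread notices above all report intr mode. The backing storage is a plain file exported through the AIO bdev, and the TCP transport is created with a modest queue depth (256), which is why perf later warns that requests may be queued at the driver. Condensed from the trace, with the workspace path shortened to $SPDK_DIR here, the target-side preparation is roughly:

  dd if=/dev/zero of=$SPDK_DIR/test/nvmf/target/aiofile bs=2048 count=5000   # ~10 MB backing file
  rpc_cmd bdev_aio_create $SPDK_DIR/test/nvmf/target/aiofile AIO0 2048       # AIO bdev, 2048-byte blocks
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256                     # TCP transport, -u io-unit-size, -q queue depth

rpc_cmd is the harness wrapper around scripts/rpc.py, talking to the target over the default /var/tmp/spdk.sock socket it waited for above.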
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:19.974 [2024-12-10 00:19:54.330447] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 609669 0 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 609669 0 idle 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=609669 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 609669 -w 256 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 609669 root 20 0 128.2g 46080 33792 S 6.7 0.0 0:00.25 reactor_0' 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 609669 root 20 0 128.2g 46080 33792 S 6.7 0.0 0:00.25 reactor_0 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 609669 1 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 609669 1 idle 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=609669 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:19.974 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 609669 -w 256 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 609712 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 609712 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=609748 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
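The subsystem wiring from target/interrupt.sh@19-21 above is just three RPCs: create nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, attach AIO0 as its namespace, and listen on 10.0.0.2:4420. Before any load is applied, reactor_is_idle confirms both reactors are near idle, and it does so by sampling top for the reactor thread. A minimal sketch of that check, using this run's target pid:

  pid=609669; idx=0
  top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" \
    | sed -e 's/^\s*//g' | awk '{print $9}'          # %CPU column of the reactor thread

The integer part of that %CPU figure is compared against the thresholds at the top of the check: idle means at most 30%, and busy normally means at least 65%, though the busy check during the load phase overrides BUSY_THRESHOLD down to 30. The load itself is the spdk_nvme_perf invocation just above: queue depth 256 (-q), 4 KiB I/O (-o 4096), a random read/write mix (-w randrw -M 30), 10 seconds (-t 10) on cores 2-3 (-c 0xC), with the -r transport ID pointing at 10.0.0.2:4420 and subsystem nqn.2016-06.io.spdk:cnode1.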
00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 609669 0 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 609669 0 busy 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=609669 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 609669 -w 256 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 609669 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.44 reactor_0' 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 609669 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.44 reactor_0 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:38:19.975 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:38:20.235 00:19:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 609669 1 00:38:20.235 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 609669 1 busy 00:38:20.235 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=609669 00:38:20.235 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:20.235 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:38:20.235 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:38:20.235 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:20.235 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:38:20.235 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:20.235 00:19:54 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:38:20.235 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:20.235 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 609669 -w 256 00:38:20.235 00:19:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:20.235 00:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 609712 root 20 0 128.2g 46848 33792 R 93.3 0.0 0:00.27 reactor_1' 00:38:20.235 00:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 609712 root 20 0 128.2g 46848 33792 R 93.3 0.0 0:00.27 reactor_1 00:38:20.235 00:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:20.235 00:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:20.235 00:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:38:20.235 00:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:38:20.235 00:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:38:20.235 00:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:38:20.235 00:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:38:20.235 00:19:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:20.235 00:19:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 609748 00:38:30.217 Initializing NVMe Controllers 00:38:30.217 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:30.217 Controller IO queue size 256, less than required. 00:38:30.217 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:30.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:38:30.217 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:38:30.217 Initialization complete. Launching workers. 
00:38:30.217 ======================================================== 00:38:30.217 Latency(us) 00:38:30.217 Device Information : IOPS MiB/s Average min max 00:38:30.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16072.80 62.78 15934.55 3637.06 31216.85 00:38:30.217 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16239.60 63.44 15767.84 8181.06 26960.14 00:38:30.217 ======================================================== 00:38:30.217 Total : 32312.39 126.22 15850.76 3637.06 31216.85 00:38:30.217 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 609669 0 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 609669 0 idle 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=609669 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:30.217 00:20:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 609669 -w 256 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 609669 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.24 reactor_0' 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 609669 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.24 reactor_0 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 609669 1 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 609669 1 idle 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=609669 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 609669 -w 256 00:38:30.217 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:30.477 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 609712 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:38:30.477 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 609712 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:38:30.477 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:30.477 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:30.477 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:30.477 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:30.477 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:30.477 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:30.477 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:30.477 00:20:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:30.477 00:20:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:38:30.736 00:20:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:38:30.736 00:20:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:38:30.736 00:20:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:38:30.736 00:20:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:38:30.736 00:20:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 609669 0 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 609669 0 idle 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=609669 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 609669 -w 256 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 609669 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.47 reactor_0' 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 609669 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.47 reactor_0 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 609669 1 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 609669 1 idle 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=609669 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:38:33.273 00:20:07 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 609669 -w 256 00:38:33.273 00:20:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:38:33.273 00:20:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 609712 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1' 00:38:33.273 00:20:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 609712 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1 00:38:33.273 00:20:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:38:33.273 00:20:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:38:33.273 00:20:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:38:33.273 00:20:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:38:33.273 00:20:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:38:33.274 00:20:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:38:33.274 00:20:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:38:33.274 00:20:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:38:33.274 00:20:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:38:33.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:38:33.274 00:20:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:38:33.274 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:38:33.274 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:38:33.274 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:33.533 rmmod nvme_tcp 00:38:33.533 rmmod nvme_fabrics 00:38:33.533 rmmod nvme_keyring 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 609669 ']' 00:38:33.533 
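This is the payoff of the interrupt-mode test: during the perf run both reactor threads were pegged (99.9% and 93.3% in state R above), but once the load stops, and again after a plain nvme-cli connect, they drop back to 0.0% in state S, because interrupt-mode reactors sleep on their event file descriptors instead of polling. The host-side check is ordinary nvme-cli plus a serial-number match; condensed from the trace:

  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
               --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # becomes 1 once the namespace shows up
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The disconnect reports one controller removed, after which the script clears its exit trap and nvmftestfini begins tearing everything down (the sync and module removals above).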
00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 609669 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 609669 ']' 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 609669 00:38:33.533 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:38:33.534 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:33.534 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 609669 00:38:33.534 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:33.534 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:33.534 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 609669' 00:38:33.534 killing process with pid 609669 00:38:33.534 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 609669 00:38:33.534 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 609669 00:38:33.800 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:33.800 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:33.800 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:33.800 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:38:33.800 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:38:33.800 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:33.800 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:38:33.800 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:33.800 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:33.800 00:20:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:33.800 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:38:33.800 00:20:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:35.740 00:20:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:35.740 00:38:35.740 real 0m22.788s 00:38:35.740 user 0m39.617s 00:38:35.740 sys 0m8.371s 00:38:35.740 00:20:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:35.740 00:20:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:35.740 ************************************ 00:38:35.740 END TEST nvmf_interrupt 00:38:35.740 ************************************ 00:38:35.740 00:38:35.740 real 27m32.164s 00:38:35.740 user 56m53.696s 00:38:35.740 sys 9m13.059s 00:38:36.000 00:20:10 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:36.000 00:20:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:36.000 ************************************ 00:38:36.000 END TEST nvmf_tcp 00:38:36.000 ************************************ 00:38:36.000 00:20:10 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:38:36.000 00:20:10 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:36.000 00:20:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
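nvmftestfini is the mirror image of the setup: unload the host kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), kill the target process, strip only the firewall rules the harness added, and dismantle the namespace. The firewall cleanup relies on the SPDK_NVMF comment attached when the rule was inserted; roughly, and with the namespace removal spelled out as an assumption of this sketch (the trace only shows _remove_spdk_ns being invoked):

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except the harness's tagged rules
  ip netns delete cvl_0_0_ns_spdk                        # what _remove_spdk_ns boils down to (assumption)
  ip -4 addr flush cvl_0_1

With that, the sub-test reports its timing (about 23 s wall clock), the END TEST banners close out nvmf_interrupt and the whole nvmf_tcp group, and run_test immediately starts the next suite, spdkcli_nvmf_tcp, against a fresh nvmf_tgt.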
00:38:36.000 00:20:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:36.000 00:20:10 -- common/autotest_common.sh@10 -- # set +x 00:38:36.000 ************************************ 00:38:36.000 START TEST spdkcli_nvmf_tcp 00:38:36.000 ************************************ 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:38:36.000 * Looking for test storage... 00:38:36.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:36.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.000 --rc genhtml_branch_coverage=1 00:38:36.000 --rc genhtml_function_coverage=1 00:38:36.000 --rc genhtml_legend=1 00:38:36.000 --rc geninfo_all_blocks=1 00:38:36.000 --rc geninfo_unexecuted_blocks=1 00:38:36.000 00:38:36.000 ' 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:36.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.000 --rc genhtml_branch_coverage=1 00:38:36.000 --rc genhtml_function_coverage=1 00:38:36.000 --rc genhtml_legend=1 00:38:36.000 --rc geninfo_all_blocks=1 00:38:36.000 --rc geninfo_unexecuted_blocks=1 00:38:36.000 00:38:36.000 ' 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:36.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.000 --rc genhtml_branch_coverage=1 00:38:36.000 --rc genhtml_function_coverage=1 00:38:36.000 --rc genhtml_legend=1 00:38:36.000 --rc geninfo_all_blocks=1 00:38:36.000 --rc geninfo_unexecuted_blocks=1 00:38:36.000 00:38:36.000 ' 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:36.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.000 --rc genhtml_branch_coverage=1 00:38:36.000 --rc genhtml_function_coverage=1 00:38:36.000 --rc genhtml_legend=1 00:38:36.000 --rc geninfo_all_blocks=1 00:38:36.000 --rc geninfo_unexecuted_blocks=1 00:38:36.000 00:38:36.000 ' 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/common.sh 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/clear_config.py 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:38:36.000 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 
00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.260 00:20:10 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:38:36.261 
00:20:10 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:36.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=612434 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 612434 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 612434 ']' 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:36.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:36.261 00:20:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:36.261 [2024-12-10 00:20:11.016215] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:38:36.261 [2024-12-10 00:20:11.016261] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612434 ] 00:38:36.261 [2024-12-10 00:20:11.092731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:36.261 [2024-12-10 00:20:11.136218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:36.261 [2024-12-10 00:20:11.136220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.520 00:20:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:36.520 00:20:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:38:36.520 00:20:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:38:36.520 00:20:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:36.520 00:20:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:36.520 00:20:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:38:36.520 00:20:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:38:36.520 00:20:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:38:36.520 00:20:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:36.520 00:20:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:36.520 00:20:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:38:36.520 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:38:36.520 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:38:36.520 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:38:36.520 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:38:36.520 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:38:36.520 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:38:36.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:36.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:38:36.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:38:36.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:38:36.520 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:38:36.520 ' 00:38:39.824 [2024-12-10 00:20:14.035308] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:40.760 [2024-12-10 00:20:15.375784] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:38:43.295 [2024-12-10 00:20:17.867583] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:38:45.204 [2024-12-10 00:20:20.030399] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:38:47.112 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:38:47.112 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:38:47.112 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:38:47.112 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:38:47.112 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:38:47.112 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:38:47.112 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:38:47.112 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:47.112 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:38:47.112 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:47.112 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:38:47.112 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:38:47.112 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:38:47.112 00:20:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:38:47.112 00:20:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:47.112 00:20:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:47.112 00:20:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:38:47.112 00:20:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:47.113 00:20:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:47.113 00:20:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:38:47.113 00:20:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdkcli.py ll /nvmf 00:38:47.372 00:20:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:38:47.372 00:20:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:38:47.372 00:20:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:38:47.372 00:20:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:47.372 00:20:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
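The create step above is driven entirely through spdkcli_job.py, but the same command tree can be replayed by hand with scripts/spdkcli.py, exactly as check_match does with "ll /nvmf". Below is a minimal sketch of the core sequence, assuming the nvmf_tgt is already running on its default RPC socket; only one bdev/subsystem pair from the job is shown, the SPDKCLI variable is introduced here for brevity, and the unquoted argument style is an assumption modeled on the "ll /nvmf" call seen above.

  SPDKCLI=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdkcli.py
  # Backing bdev: size 32 (MiB) with a 512-byte block size
  $SPDKCLI /bdevs/malloc create 32 512 Malloc3
  # TCP transport with the non-default limits exercised by the job above
  $SPDKCLI nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  # Subsystem with serial number, namespace cap and open host policy,
  # then a namespace (nsid=1) and a TCP listener on 127.0.0.1:4260
  $SPDKCLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
  $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
  # Inspect the resulting tree, mirroring what check_match diffs against the .match file
  $SPDKCLI ll /nvmf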
00:38:47.631 00:20:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:38:47.631 00:20:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:47.631 00:20:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:47.631 00:20:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:38:47.631 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:38:47.631 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:47.631 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:38:47.631 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:38:47.631 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:38:47.631 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:38:47.631 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:38:47.631 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:38:47.631 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:38:47.631 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:38:47.631 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:38:47.631 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:38:47.631 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:38:47.631 ' 00:38:52.909 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:38:52.909 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:38:52.909 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:52.909 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:38:52.909 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:38:52.909 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:38:52.909 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:38:52.909 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:38:52.909 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:38:52.909 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:38:52.909 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:38:52.909 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:38:52.909 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:38:52.909 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:38:53.170 00:20:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:38:53.170 00:20:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:53.170 00:20:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
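The clear step above is the mirror image of the create step: namespaces and hosts are removed first, then listeners, then the subsystems, and the Malloc bdevs only at the end, so nothing is deleted while a subsystem still references it. A rough sketch of the same teardown using the plain JSON-RPC client instead of spdkcli follows; the method names are the standard rpc.py ones, but the RPC and NQN variables are introduced here and the exact option spellings are assumptions to check against "rpc.py <method> --help".

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
  NQN=nqn.2014-08.org.spdk:cnode1
  $RPC nvmf_subsystem_remove_ns $NQN 1                               # drop nsid=1 (Malloc3) first
  $RPC nvmf_subsystem_remove_host $NQN nqn.2014-08.org.spdk:cnode2   # revoke the allowed host
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 127.0.0.1 -s 4262
  $RPC nvmf_delete_subsystem $NQN                                    # subsystem goes after its children
  $RPC bdev_malloc_delete Malloc3                                    # bdevs last, once unreferenced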
00:38:53.170 00:20:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 612434 00:38:53.170 00:20:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 612434 ']' 00:38:53.170 00:20:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 612434 00:38:53.170 00:20:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:38:53.170 00:20:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:53.170 00:20:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 612434 00:38:53.170 00:20:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:53.170 00:20:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:53.170 00:20:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 612434' 00:38:53.170 killing process with pid 612434 00:38:53.170 00:20:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 612434 00:38:53.170 00:20:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 612434 00:38:53.430 00:20:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:38:53.430 00:20:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:38:53.430 00:20:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 612434 ']' 00:38:53.430 00:20:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 612434 00:38:53.430 00:20:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 612434 ']' 00:38:53.430 00:20:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 612434 00:38:53.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (612434) - No such process 00:38:53.430 00:20:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 612434 is not found' 00:38:53.430 Process with pid 612434 is not found 00:38:53.430 00:20:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:38:53.430 00:20:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:38:53.430 00:20:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:38:53.430 00:38:53.430 real 0m17.413s 00:38:53.430 user 0m38.271s 00:38:53.430 sys 0m0.883s 00:38:53.430 00:20:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:53.430 00:20:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:38:53.430 ************************************ 00:38:53.430 END TEST spdkcli_nvmf_tcp 00:38:53.430 ************************************ 00:38:53.430 00:20:28 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:53.430 00:20:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:53.430 00:20:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:53.430 00:20:28 -- common/autotest_common.sh@10 -- # set +x 00:38:53.430 ************************************ 00:38:53.430 START TEST nvmf_identify_passthru 00:38:53.430 ************************************ 00:38:53.430 00:20:28 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:38:53.430 * Looking for 
test storage... 00:38:53.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:38:53.430 00:20:28 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:53.430 00:20:28 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:38:53.430 00:20:28 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:53.691 00:20:28 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:53.691 00:20:28 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:38:53.691 00:20:28 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:53.691 00:20:28 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:53.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.691 --rc genhtml_branch_coverage=1 00:38:53.691 --rc genhtml_function_coverage=1 00:38:53.691 --rc genhtml_legend=1 00:38:53.691 --rc geninfo_all_blocks=1 00:38:53.691 --rc geninfo_unexecuted_blocks=1 00:38:53.691 00:38:53.691 ' 00:38:53.691 00:20:28 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:53.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.691 --rc genhtml_branch_coverage=1 00:38:53.691 --rc genhtml_function_coverage=1 00:38:53.691 --rc genhtml_legend=1 00:38:53.691 --rc geninfo_all_blocks=1 00:38:53.691 --rc geninfo_unexecuted_blocks=1 00:38:53.691 00:38:53.691 ' 00:38:53.691 00:20:28 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:53.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.691 --rc genhtml_branch_coverage=1 00:38:53.691 --rc genhtml_function_coverage=1 00:38:53.691 --rc genhtml_legend=1 00:38:53.691 --rc geninfo_all_blocks=1 00:38:53.691 --rc geninfo_unexecuted_blocks=1 00:38:53.691 00:38:53.691 ' 00:38:53.691 00:20:28 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:53.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:53.691 --rc genhtml_branch_coverage=1 00:38:53.691 --rc genhtml_function_coverage=1 00:38:53.691 --rc genhtml_legend=1 00:38:53.691 --rc geninfo_all_blocks=1 00:38:53.691 --rc geninfo_unexecuted_blocks=1 00:38:53.691 00:38:53.691 ' 00:38:53.691 00:20:28 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:38:53.691 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:38:53.692 00:20:28 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:38:53.692 00:20:28 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:53.692 00:20:28 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:53.692 00:20:28 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:53.692 00:20:28 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.692 00:20:28 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.692 00:20:28 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.692 00:20:28 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:53.692 00:20:28 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:53.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:53.692 00:20:28 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:38:53.692 00:20:28 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:38:53.692 00:20:28 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:53.692 00:20:28 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:53.692 00:20:28 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:53.692 00:20:28 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.692 00:20:28 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.692 00:20:28 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.692 00:20:28 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:38:53.692 00:20:28 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:53.692 00:20:28 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:53.692 00:20:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:53.692 00:20:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:53.692 00:20:28 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:38:53.692 00:20:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:39:00.315 00:20:34 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:00.315 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:39:00.315 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:00.315 Found net devices under 0000:86:00.0: cvl_0_0 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:00.315 Found net devices under 0000:86:00.1: cvl_0_1 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:00.315 00:20:34 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:00.315 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:00.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:00.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:39:00.316 00:39:00.316 --- 10.0.0.2 ping statistics --- 00:39:00.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:00.316 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:00.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:00.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:39:00.316 00:39:00.316 --- 10.0.0.1 ping statistics --- 00:39:00.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:00.316 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:00.316 00:20:34 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:00.316 00:20:34 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:39:00.316 00:20:34 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:00.316 00:20:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:00.316 00:20:34 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:39:00.316 00:20:34 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:39:00.316 00:20:34 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:39:00.316 00:20:34 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:39:00.316 00:20:34 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:39:00.316 00:20:34 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:39:00.316 00:20:34 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:39:00.316 00:20:34 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:39:00.316 00:20:34 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:39:00.316 00:20:34 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:39:00.316 00:20:34 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:39:00.316 00:20:34 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:39:00.316 00:20:34 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:39:00.316 00:20:34 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:39:00.316 00:20:34 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:39:00.316 00:20:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:39:00.316 00:20:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:39:00.316 00:20:34 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:39:04.515 00:20:38 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:39:04.515 00:20:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:39:04.515 00:20:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:39:04.515 00:20:38 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:39:08.712 00:20:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:39:08.712 00:20:42 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:39:08.712 00:20:42 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:08.712 00:20:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:08.712 00:20:42 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:39:08.712 00:20:42 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:08.712 00:20:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:08.712 00:20:42 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=619686 00:39:08.712 00:20:42 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:39:08.712 00:20:42 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:08.712 00:20:42 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 619686 00:39:08.712 00:20:42 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 619686 ']' 00:39:08.712 00:20:42 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:08.712 00:20:42 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:08.712 00:20:42 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:08.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:08.712 00:20:42 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:08.712 00:20:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:08.712 [2024-12-10 00:20:42.964347] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:39:08.712 [2024-12-10 00:20:42.964399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:08.712 [2024-12-10 00:20:43.045099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:08.712 [2024-12-10 00:20:43.087588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:08.712 [2024-12-10 00:20:43.087628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:39:08.712 [2024-12-10 00:20:43.087636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:08.712 [2024-12-10 00:20:43.087642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:08.712 [2024-12-10 00:20:43.087648] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:08.712 [2024-12-10 00:20:43.089069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:08.712 [2024-12-10 00:20:43.089240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:08.712 [2024-12-10 00:20:43.089541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:08.712 [2024-12-10 00:20:43.089542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.972 00:20:43 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.972 00:20:43 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:39:08.972 00:20:43 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:39:08.972 00:20:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.972 00:20:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:08.972 INFO: Log level set to 20 00:39:08.972 INFO: Requests: 00:39:08.972 { 00:39:08.972 "jsonrpc": "2.0", 00:39:08.972 "method": "nvmf_set_config", 00:39:08.972 "id": 1, 00:39:08.972 "params": { 00:39:08.972 "admin_cmd_passthru": { 00:39:08.972 "identify_ctrlr": true 00:39:08.972 } 00:39:08.972 } 00:39:08.972 } 00:39:08.972 00:39:08.972 INFO: response: 00:39:08.972 { 00:39:08.972 "jsonrpc": "2.0", 00:39:08.972 "id": 1, 00:39:08.972 "result": true 00:39:08.972 } 00:39:08.972 00:39:08.972 00:20:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.972 00:20:43 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:39:08.972 00:20:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.972 00:20:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:08.972 INFO: Setting log level to 20 00:39:08.972 INFO: Setting log level to 20 00:39:08.972 INFO: Log level set to 20 00:39:08.972 INFO: Log level set to 20 00:39:08.972 INFO: Requests: 00:39:08.972 { 00:39:08.972 "jsonrpc": "2.0", 00:39:08.972 "method": "framework_start_init", 00:39:08.972 "id": 1 00:39:08.972 } 00:39:08.972 00:39:08.972 INFO: Requests: 00:39:08.972 { 00:39:08.972 "jsonrpc": "2.0", 00:39:08.972 "method": "framework_start_init", 00:39:08.972 "id": 1 00:39:08.972 } 00:39:08.972 00:39:08.972 [2024-12-10 00:20:43.883447] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:39:08.972 INFO: response: 00:39:08.972 { 00:39:08.972 "jsonrpc": "2.0", 00:39:08.972 "id": 1, 00:39:08.972 "result": true 00:39:08.972 } 00:39:08.972 00:39:08.972 INFO: response: 00:39:08.972 { 00:39:08.972 "jsonrpc": "2.0", 00:39:08.972 "id": 1, 00:39:08.972 "result": true 00:39:08.972 } 00:39:08.972 00:39:08.972 00:20:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.973 00:20:43 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:08.973 00:20:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.973 00:20:43 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:39:08.973 INFO: Setting log level to 40 00:39:08.973 INFO: Setting log level to 40 00:39:08.973 INFO: Setting log level to 40 00:39:08.973 [2024-12-10 00:20:43.896712] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:08.973 00:20:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.973 00:20:43 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:39:08.973 00:20:43 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:08.973 00:20:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:09.232 00:20:43 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:39:09.232 00:20:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.232 00:20:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:12.525 Nvme0n1 00:39:12.525 00:20:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.525 00:20:46 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:39:12.525 00:20:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.525 00:20:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:12.525 00:20:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.525 00:20:46 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:39:12.525 00:20:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.525 00:20:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:12.525 00:20:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.525 00:20:46 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:12.525 00:20:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.525 00:20:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:12.525 [2024-12-10 00:20:46.804467] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.525 00:20:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.525 00:20:46 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:39:12.525 00:20:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.525 00:20:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:12.525 [ 00:39:12.525 { 00:39:12.525 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:39:12.525 "subtype": "Discovery", 00:39:12.525 "listen_addresses": [], 00:39:12.525 "allow_any_host": true, 00:39:12.525 "hosts": [] 00:39:12.525 }, 00:39:12.525 { 00:39:12.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:39:12.525 "subtype": "NVMe", 00:39:12.525 "listen_addresses": [ 00:39:12.525 { 00:39:12.526 "trtype": "TCP", 00:39:12.526 "adrfam": "IPv4", 00:39:12.526 "traddr": "10.0.0.2", 00:39:12.526 "trsvcid": "4420" 00:39:12.526 } 00:39:12.526 ], 00:39:12.526 "allow_any_host": true, 00:39:12.526 "hosts": [], 00:39:12.526 "serial_number": 
"SPDK00000000000001", 00:39:12.526 "model_number": "SPDK bdev Controller", 00:39:12.526 "max_namespaces": 1, 00:39:12.526 "min_cntlid": 1, 00:39:12.526 "max_cntlid": 65519, 00:39:12.526 "namespaces": [ 00:39:12.526 { 00:39:12.526 "nsid": 1, 00:39:12.526 "bdev_name": "Nvme0n1", 00:39:12.526 "name": "Nvme0n1", 00:39:12.526 "nguid": "71195507CC714FD9B8503F09CE0C8D81", 00:39:12.526 "uuid": "71195507-cc71-4fd9-b850-3f09ce0c8d81" 00:39:12.526 } 00:39:12.526 ] 00:39:12.526 } 00:39:12.526 ] 00:39:12.526 00:20:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.526 00:20:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:12.526 00:20:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:39:12.526 00:20:46 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:39:12.526 00:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:39:12.526 00:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:39:12.526 00:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:39:12.526 00:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:39:12.526 00:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:39:12.526 00:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:39:12.526 00:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:39:12.526 00:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:12.526 00:20:47 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:12.526 00:20:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:12.526 00:20:47 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:12.526 00:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:39:12.526 00:20:47 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:39:12.526 00:20:47 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:12.526 00:20:47 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:39:12.526 00:20:47 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:12.526 00:20:47 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:39:12.526 00:20:47 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:12.526 00:20:47 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:12.526 rmmod nvme_tcp 00:39:12.526 rmmod nvme_fabrics 00:39:12.526 rmmod nvme_keyring 00:39:12.526 00:20:47 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:12.526 00:20:47 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:39:12.526 00:20:47 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:39:12.526 00:20:47 nvmf_identify_passthru -- nvmf/common.sh@517 -- 
# '[' -n 619686 ']' 00:39:12.526 00:20:47 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 619686 00:39:12.526 00:20:47 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 619686 ']' 00:39:12.526 00:20:47 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 619686 00:39:12.526 00:20:47 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:39:12.526 00:20:47 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:12.526 00:20:47 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 619686 00:39:12.526 00:20:47 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:12.526 00:20:47 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:12.526 00:20:47 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 619686' 00:39:12.526 killing process with pid 619686 00:39:12.526 00:20:47 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 619686 00:39:12.526 00:20:47 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 619686 00:39:13.908 00:20:48 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:13.908 00:20:48 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:13.908 00:20:48 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:13.908 00:20:48 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:39:13.909 00:20:48 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:39:13.909 00:20:48 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:13.909 00:20:48 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:39:13.909 00:20:48 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:13.909 00:20:48 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:13.909 00:20:48 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:13.909 00:20:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:13.909 00:20:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.452 00:20:50 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:16.452 00:39:16.452 real 0m22.620s 00:39:16.452 user 0m29.658s 00:39:16.452 sys 0m6.264s 00:39:16.452 00:20:50 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:16.452 00:20:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:39:16.452 ************************************ 00:39:16.452 END TEST nvmf_identify_passthru 00:39:16.452 ************************************ 00:39:16.452 00:20:50 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/dif.sh 00:39:16.452 00:20:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:16.452 00:20:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:16.452 00:20:50 -- common/autotest_common.sh@10 -- # set +x 00:39:16.452 ************************************ 00:39:16.452 START TEST nvmf_dif 00:39:16.452 ************************************ 00:39:16.452 00:20:50 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/dif.sh 00:39:16.452 * Looking for test 
storage... 00:39:16.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:39:16.452 00:20:51 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:16.452 00:20:51 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:39:16.452 00:20:51 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:16.452 00:20:51 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:16.452 00:20:51 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:16.452 00:20:51 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:16.452 00:20:51 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:39:16.453 00:20:51 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:16.453 00:20:51 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:16.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.453 --rc genhtml_branch_coverage=1 00:39:16.453 --rc genhtml_function_coverage=1 00:39:16.453 --rc genhtml_legend=1 00:39:16.453 --rc geninfo_all_blocks=1 00:39:16.453 --rc geninfo_unexecuted_blocks=1 00:39:16.453 00:39:16.453 ' 00:39:16.453 00:20:51 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:16.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.453 --rc genhtml_branch_coverage=1 00:39:16.453 --rc genhtml_function_coverage=1 00:39:16.453 --rc genhtml_legend=1 00:39:16.453 --rc geninfo_all_blocks=1 00:39:16.453 --rc geninfo_unexecuted_blocks=1 00:39:16.453 00:39:16.453 ' 00:39:16.453 00:20:51 nvmf_dif -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:16.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.453 --rc genhtml_branch_coverage=1 00:39:16.453 --rc genhtml_function_coverage=1 00:39:16.453 --rc genhtml_legend=1 00:39:16.453 --rc geninfo_all_blocks=1 00:39:16.453 --rc geninfo_unexecuted_blocks=1 00:39:16.453 00:39:16.453 ' 00:39:16.453 00:20:51 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:16.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.453 --rc genhtml_branch_coverage=1 00:39:16.453 --rc genhtml_function_coverage=1 00:39:16.453 --rc genhtml_legend=1 00:39:16.453 --rc geninfo_all_blocks=1 00:39:16.453 --rc geninfo_unexecuted_blocks=1 00:39:16.453 00:39:16.453 ' 00:39:16.453 00:20:51 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:16.453 00:20:51 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:16.453 00:20:51 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.453 00:20:51 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.453 00:20:51 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.453 00:20:51 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:39:16.453 00:20:51 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:16.453 00:20:51 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:16.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:16.454 00:20:51 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:16.454 00:20:51 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:16.454 00:20:51 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:16.454 00:20:51 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:39:16.454 00:20:51 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:39:16.454 00:20:51 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:39:16.454 00:20:51 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:39:16.454 00:20:51 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:39:16.454 00:20:51 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:16.454 00:20:51 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:16.454 00:20:51 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:16.454 00:20:51 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:16.454 00:20:51 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:16.454 00:20:51 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:16.454 00:20:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:16.454 00:20:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.454 00:20:51 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:16.454 00:20:51 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:16.454 00:20:51 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:39:16.454 00:20:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:39:23.048 Found 0000:86:00.0 (0x8086 - 0x159b) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:23.048 
00:20:56 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:39:23.048 Found 0000:86:00.1 (0x8086 - 0x159b) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:39:23.048 Found net devices under 0000:86:00.0: cvl_0_0 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:39:23.048 Found net devices under 0000:86:00.1: cvl_0_1 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:23.048 00:20:56 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:23.048 00:20:57 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:23.048 00:20:57 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:23.048 00:20:57 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:23.048 00:20:57 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:23.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:23.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:39:23.048 00:39:23.048 --- 10.0.0.2 ping statistics --- 00:39:23.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.048 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:39:23.048 00:20:57 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:23.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
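The nvmf_tcp_init trace above splits the two E810 ports between a private network namespace for the SPDK target and the host namespace for the initiator; the reverse-direction ping completes just below. A condensed, hedged restatement of those commands, with the interface names and addresses taken from this run:

# Condensed sketch of the nvmf_tcp_init steps traced above. Interface names
# (cvl_0_0 / cvl_0_1) and the 10.0.0.0/24 addresses are the values from this run.
TARGET_IF=cvl_0_0          # moved into a private netns and used by the SPDK target
INITIATOR_IF=cvl_0_1       # stays in the host namespace as the initiator port
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open TCP/4420 on the initiator side; the comment tag is what lets the teardown
# path (iptables-save | grep -v SPDK_NVMF | iptables-restore) strip the rule again.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT"

# Connectivity check in both directions
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1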
00:39:23.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:39:23.048 00:39:23.048 --- 10.0.0.1 ping statistics --- 00:39:23.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:23.048 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:39:23.048 00:20:57 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:23.048 00:20:57 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:39:23.048 00:20:57 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:39:23.048 00:20:57 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:39:25.088 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:39:25.088 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:39:25.088 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:39:25.088 00:20:59 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:25.088 00:20:59 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:25.088 00:20:59 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:25.088 00:20:59 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:25.088 00:20:59 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:25.088 00:20:59 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:25.088 00:20:59 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:39:25.088 00:20:59 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:39:25.088 00:20:59 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:25.088 00:20:59 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:25.088 00:20:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:25.088 00:20:59 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=625381 00:39:25.088 00:20:59 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 625381 00:39:25.088 00:20:59 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:39:25.088 00:20:59 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 625381 ']' 00:39:25.088 00:20:59 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:25.088 00:20:59 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:25.088 00:20:59 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:39:25.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:25.088 00:20:59 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:25.088 00:20:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:25.088 [2024-12-10 00:20:59.976834] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:39:25.088 [2024-12-10 00:20:59.976876] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:25.378 [2024-12-10 00:21:00.057588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:25.378 [2024-12-10 00:21:00.098213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:25.378 [2024-12-10 00:21:00.098251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:25.378 [2024-12-10 00:21:00.098259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:25.378 [2024-12-10 00:21:00.098265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:25.378 [2024-12-10 00:21:00.098271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:25.378 [2024-12-10 00:21:00.098831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:25.378 00:21:00 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:25.378 00:21:00 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:39:25.379 00:21:00 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:25.379 00:21:00 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:25.379 00:21:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:25.379 00:21:00 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:25.379 00:21:00 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:39:25.379 00:21:00 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:39:25.379 00:21:00 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.379 00:21:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:25.379 [2024-12-10 00:21:00.230848] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:25.379 00:21:00 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.379 00:21:00 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:39:25.379 00:21:00 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:25.379 00:21:00 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:25.379 00:21:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:25.379 ************************************ 00:39:25.379 START TEST fio_dif_1_default 00:39:25.379 ************************************ 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:25.379 bdev_null0 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:25.379 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:25.379 [2024-12-10 00:21:00.303173] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:25.686 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:25.686 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:39:25.686 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:39:25.686 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:25.686 00:21:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:39:25.686 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:25.686 00:21:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:39:25.686 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:25.686 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:39:25.686 00:21:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:25.686 00:21:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:25.686 { 00:39:25.686 "params": { 00:39:25.686 "name": "Nvme$subsystem", 00:39:25.686 "trtype": "$TEST_TRANSPORT", 00:39:25.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:25.686 "adrfam": "ipv4", 00:39:25.686 "trsvcid": "$NVMF_PORT", 00:39:25.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:25.686 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:39:25.686 "hdgst": ${hdgst:-false}, 00:39:25.686 "ddgst": ${ddgst:-false} 00:39:25.686 }, 00:39:25.686 "method": "bdev_nvme_attach_controller" 00:39:25.686 } 00:39:25.686 EOF 00:39:25.686 )") 00:39:25.686 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:25.686 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:25.687 "params": { 00:39:25.687 "name": "Nvme0", 00:39:25.687 "trtype": "tcp", 00:39:25.687 "traddr": "10.0.0.2", 00:39:25.687 "adrfam": "ipv4", 00:39:25.687 "trsvcid": "4420", 00:39:25.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:25.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:25.687 "hdgst": false, 00:39:25.687 "ddgst": false 00:39:25.687 }, 00:39:25.687 "method": "bdev_nvme_attach_controller" 00:39:25.687 }' 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:39:25.687 00:21:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:26.008 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:26.008 fio-3.35 00:39:26.008 Starting 1 thread 00:39:38.301 00:39:38.301 filename0: (groupid=0, jobs=1): err= 0: pid=625829: Tue Dec 10 00:21:11 2024 00:39:38.301 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec) 00:39:38.301 slat (nsec): min=5716, max=35187, avg=6453.06, stdev=1370.44 00:39:38.301 clat (usec): min=40836, max=42031, avg=40998.51, stdev=132.40 00:39:38.301 lat (usec): min=40842, max=42038, avg=41004.96, stdev=132.70 00:39:38.301 clat percentiles (usec): 00:39:38.301 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:38.301 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:38.301 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:38.301 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:38.301 | 99.99th=[42206] 00:39:38.301 bw ( KiB/s): min= 384, max= 416, per=99.47%, avg=388.80, stdev=11.72, samples=20 00:39:38.301 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:39:38.301 lat (msec) : 50=100.00% 00:39:38.301 cpu : usr=92.41%, sys=7.31%, ctx=9, majf=0, minf=0 00:39:38.301 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:38.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:38.301 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:38.301 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:38.301 00:39:38.301 Run status group 0 (all jobs): 
00:39:38.301 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10009-10009msec 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.301 00:39:38.301 real 0m11.213s 00:39:38.301 user 0m15.572s 00:39:38.301 sys 0m1.099s 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:39:38.301 ************************************ 00:39:38.301 END TEST fio_dif_1_default 00:39:38.301 ************************************ 00:39:38.301 00:21:11 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:39:38.301 00:21:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:38.301 00:21:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:38.301 00:21:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:38.301 ************************************ 00:39:38.301 START TEST fio_dif_1_multi_subsystems 00:39:38.301 ************************************ 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:38.301 bdev_null0 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.301 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:38.302 [2024-12-10 00:21:11.581346] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:38.302 bdev_null1 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:38.302 { 00:39:38.302 "params": { 00:39:38.302 "name": "Nvme$subsystem", 00:39:38.302 "trtype": "$TEST_TRANSPORT", 00:39:38.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.302 "adrfam": "ipv4", 00:39:38.302 "trsvcid": "$NVMF_PORT", 00:39:38.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.302 "hdgst": ${hdgst:-false}, 00:39:38.302 "ddgst": ${ddgst:-false} 00:39:38.302 }, 00:39:38.302 "method": "bdev_nvme_attach_controller" 00:39:38.302 } 00:39:38.302 EOF 00:39:38.302 )") 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:38.302 00:21:11 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:38.302 { 00:39:38.302 "params": { 00:39:38.302 "name": "Nvme$subsystem", 00:39:38.302 "trtype": "$TEST_TRANSPORT", 00:39:38.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:38.302 "adrfam": "ipv4", 00:39:38.302 "trsvcid": "$NVMF_PORT", 00:39:38.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:38.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:38.302 "hdgst": ${hdgst:-false}, 00:39:38.302 "ddgst": ${ddgst:-false} 00:39:38.302 }, 00:39:38.302 "method": "bdev_nvme_attach_controller" 00:39:38.302 } 00:39:38.302 EOF 00:39:38.302 )") 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:38.302 "params": { 00:39:38.302 "name": "Nvme0", 00:39:38.302 "trtype": "tcp", 00:39:38.302 "traddr": "10.0.0.2", 00:39:38.302 "adrfam": "ipv4", 00:39:38.302 "trsvcid": "4420", 00:39:38.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:38.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:38.302 "hdgst": false, 00:39:38.302 "ddgst": false 00:39:38.302 }, 00:39:38.302 "method": "bdev_nvme_attach_controller" 00:39:38.302 },{ 00:39:38.302 "params": { 00:39:38.302 "name": "Nvme1", 00:39:38.302 "trtype": "tcp", 00:39:38.302 "traddr": "10.0.0.2", 00:39:38.302 "adrfam": "ipv4", 00:39:38.302 "trsvcid": "4420", 00:39:38.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:38.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:38.302 "hdgst": false, 00:39:38.302 "ddgst": false 00:39:38.302 }, 00:39:38.302 "method": "bdev_nvme_attach_controller" 00:39:38.302 }' 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:38.302 
00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:39:38.302 00:21:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:38.302 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:38.302 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:39:38.302 fio-3.35 00:39:38.302 Starting 2 threads 00:39:48.312 00:39:48.312 filename0: (groupid=0, jobs=1): err= 0: pid=628144: Tue Dec 10 00:21:22 2024 00:39:48.312 read: IOPS=146, BW=586KiB/s (600kB/s)(5872KiB/10017msec) 00:39:48.312 slat (nsec): min=6167, max=30466, avg=7498.13, stdev=2354.40 00:39:48.312 clat (usec): min=382, max=42541, avg=27270.70, stdev=19285.44 00:39:48.312 lat (usec): min=388, max=42548, avg=27278.20, stdev=19285.19 00:39:48.312 clat percentiles (usec): 00:39:48.312 | 1.00th=[ 400], 5.00th=[ 404], 10.00th=[ 412], 20.00th=[ 420], 00:39:48.312 | 30.00th=[ 570], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:39:48.312 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:39:48.312 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:39:48.312 | 99.99th=[42730] 00:39:48.312 bw ( KiB/s): min= 384, max= 832, per=59.94%, avg=585.60, stdev=185.46, samples=20 00:39:48.312 iops : min= 96, max= 208, avg=146.40, stdev=46.37, samples=20 00:39:48.312 lat (usec) : 500=26.70%, 750=7.36% 00:39:48.312 lat (msec) : 50=65.94% 00:39:48.312 cpu : usr=96.74%, sys=3.02%, ctx=12, majf=0, minf=140 00:39:48.312 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.312 issued rwts: total=1468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.312 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:48.312 filename1: (groupid=0, jobs=1): err= 0: pid=628145: Tue Dec 10 00:21:22 2024 00:39:48.312 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:39:48.312 slat (nsec): min=6159, max=28007, avg=7992.84, stdev=2736.26 00:39:48.312 clat (usec): min=40762, max=41974, avg=40990.21, stdev=121.69 00:39:48.312 lat (usec): min=40768, max=41986, avg=40998.20, stdev=121.96 00:39:48.312 clat percentiles (usec): 00:39:48.312 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:48.312 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:48.312 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:48.312 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:48.312 | 99.99th=[42206] 00:39:48.312 bw ( KiB/s): min= 384, max= 416, per=39.76%, avg=388.80, stdev=11.72, samples=20 00:39:48.312 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:39:48.312 lat (msec) : 50=100.00% 00:39:48.312 cpu : usr=97.04%, sys=2.71%, ctx=14, majf=0, minf=130 00:39:48.312 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:48.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.312 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:48.312 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:48.312 latency : target=0, window=0, percentile=100.00%, depth=4 00:39:48.312 00:39:48.312 Run status group 0 (all jobs): 00:39:48.312 READ: bw=976KiB/s (999kB/s), 390KiB/s-586KiB/s (399kB/s-600kB/s), io=9776KiB (10.0MB), run=10008-10017msec 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:48.312 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.313 00:39:48.313 real 0m11.251s 00:39:48.313 user 0m25.847s 00:39:48.313 sys 0m0.902s 00:39:48.313 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:48.313 00:21:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:39:48.313 ************************************ 00:39:48.313 END TEST fio_dif_1_multi_subsystems 00:39:48.313 ************************************ 00:39:48.313 00:21:22 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:39:48.313 00:21:22 
nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:48.313 00:21:22 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:48.313 00:21:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:48.313 ************************************ 00:39:48.313 START TEST fio_dif_rand_params 00:39:48.313 ************************************ 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:48.313 bdev_null0 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:48.313 [2024-12-10 00:21:22.905702] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:48.313 { 00:39:48.313 "params": { 00:39:48.313 "name": "Nvme$subsystem", 00:39:48.313 "trtype": "$TEST_TRANSPORT", 00:39:48.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:48.313 "adrfam": "ipv4", 00:39:48.313 "trsvcid": "$NVMF_PORT", 00:39:48.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:48.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:48.313 "hdgst": ${hdgst:-false}, 00:39:48.313 "ddgst": ${ddgst:-false} 00:39:48.313 }, 00:39:48.313 "method": "bdev_nvme_attach_controller" 00:39:48.313 } 00:39:48.313 EOF 00:39:48.313 )") 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
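For reference, the target-side setup traced above amounts to the following stand-alone sequence (a minimal sketch: it assumes the harness's rpc_cmd wrapper maps to scripts/rpc.py against the running nvmf_tgt, with every argument copied verbatim from the rpc_cmd calls in the trace):

    # null bdev with metadata and DIF type 3, arguments copied from the trace above
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # NVMe-oF subsystem, namespace, and TCP listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420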
00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:48.313 "params": { 00:39:48.313 "name": "Nvme0", 00:39:48.313 "trtype": "tcp", 00:39:48.313 "traddr": "10.0.0.2", 00:39:48.313 "adrfam": "ipv4", 00:39:48.313 "trsvcid": "4420", 00:39:48.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:48.313 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:48.313 "hdgst": false, 00:39:48.313 "ddgst": false 00:39:48.313 }, 00:39:48.313 "method": "bdev_nvme_attach_controller" 00:39:48.313 }' 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:39:48.313 00:21:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:48.577 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:48.577 ... 
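The job fio is starting below (filename0) corresponds roughly to this hand-run invocation (a sketch only: nvme0.json and the bdev name Nvme0n1 are assumed placeholders, Nvme0n1 being the namespace bdev that the "Nvme0" bdev_nvme_attach_controller entry in the JSON just printed would expose, and time_based is inferred from the 5-second runtimes reported below; the harness itself feeds the config and job file through /dev/fd/62 and /dev/fd/61):

    # preload the SPDK fio bdev engine and run the same randread job:
    # 128 KiB blocks, iodepth 3, 3 jobs, 5-second runtime (thread=1 is required by the plugin)
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=nvme0.json \
      --filename=Nvme0n1 --thread=1 --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
      --runtime=5 --time_based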
00:39:48.577 fio-3.35 00:39:48.577 Starting 3 threads 00:39:55.141 00:39:55.141 filename0: (groupid=0, jobs=1): err= 0: pid=629993: Tue Dec 10 00:21:28 2024 00:39:55.141 read: IOPS=320, BW=40.1MiB/s (42.0MB/s)(202MiB/5046msec) 00:39:55.141 slat (nsec): min=6505, max=59811, avg=13466.53, stdev=5900.82 00:39:55.141 clat (usec): min=4262, max=52481, avg=9312.78, stdev=6077.11 00:39:55.141 lat (usec): min=4272, max=52504, avg=9326.25, stdev=6077.01 00:39:55.141 clat percentiles (usec): 00:39:55.141 | 1.00th=[ 5473], 5.00th=[ 6194], 10.00th=[ 6587], 20.00th=[ 7439], 00:39:55.141 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8848], 00:39:55.141 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10552], 00:39:55.141 | 99.00th=[50070], 99.50th=[50594], 99.90th=[52691], 99.95th=[52691], 00:39:55.141 | 99.99th=[52691] 00:39:55.141 bw ( KiB/s): min=26880, max=50176, per=35.49%, avg=41369.60, stdev=6434.83, samples=10 00:39:55.141 iops : min= 210, max= 392, avg=323.20, stdev=50.27, samples=10 00:39:55.141 lat (msec) : 10=90.23%, 20=7.60%, 50=1.30%, 100=0.87% 00:39:55.141 cpu : usr=95.50%, sys=4.14%, ctx=25, majf=0, minf=65 00:39:55.141 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:55.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.141 issued rwts: total=1618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.141 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:55.141 filename0: (groupid=0, jobs=1): err= 0: pid=629994: Tue Dec 10 00:21:28 2024 00:39:55.141 read: IOPS=302, BW=37.8MiB/s (39.7MB/s)(189MiB/5004msec) 00:39:55.141 slat (nsec): min=6458, max=55339, avg=12268.17, stdev=4764.59 00:39:55.141 clat (usec): min=3389, max=51958, avg=9894.12, stdev=4499.71 00:39:55.141 lat (usec): min=3397, max=51972, avg=9906.38, stdev=4500.09 00:39:55.141 clat percentiles (usec): 00:39:55.141 | 1.00th=[ 3720], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7373], 00:39:55.141 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10421], 00:39:55.141 | 70.00th=[10945], 80.00th=[11469], 90.00th=[11994], 95.00th=[12518], 00:39:55.141 | 99.00th=[13960], 99.50th=[49546], 99.90th=[51119], 99.95th=[52167], 00:39:55.141 | 99.99th=[52167] 00:39:55.141 bw ( KiB/s): min=35328, max=40704, per=33.43%, avg=38968.89, stdev=1901.90, samples=9 00:39:55.141 iops : min= 276, max= 318, avg=304.44, stdev=14.86, samples=9 00:39:55.141 lat (msec) : 4=2.18%, 10=49.24%, 20=47.59%, 50=0.59%, 100=0.40% 00:39:55.141 cpu : usr=96.08%, sys=3.60%, ctx=7, majf=0, minf=78 00:39:55.141 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:55.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.141 issued rwts: total=1515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.141 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:55.141 filename0: (groupid=0, jobs=1): err= 0: pid=629995: Tue Dec 10 00:21:28 2024 00:39:55.141 read: IOPS=292, BW=36.5MiB/s (38.3MB/s)(183MiB/5004msec) 00:39:55.141 slat (nsec): min=6373, max=47273, avg=11915.05, stdev=4530.68 00:39:55.141 clat (usec): min=3603, max=52347, avg=10252.54, stdev=6789.62 00:39:55.141 lat (usec): min=3610, max=52359, avg=10264.45, stdev=6789.76 00:39:55.141 clat percentiles (usec): 00:39:55.141 | 1.00th=[ 4424], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 
7963], 00:39:55.141 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:39:55.141 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11338], 95.00th=[11994], 00:39:55.141 | 99.00th=[50594], 99.50th=[51643], 99.90th=[52167], 99.95th=[52167], 00:39:55.141 | 99.99th=[52167] 00:39:55.141 bw ( KiB/s): min=28160, max=44544, per=31.97%, avg=37262.22, stdev=5408.07, samples=9 00:39:55.141 iops : min= 220, max= 348, avg=291.11, stdev=42.25, samples=9 00:39:55.141 lat (msec) : 4=0.41%, 10=65.32%, 20=31.60%, 50=0.96%, 100=1.71% 00:39:55.141 cpu : usr=95.90%, sys=3.78%, ctx=16, majf=0, minf=44 00:39:55.141 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:55.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:55.141 issued rwts: total=1462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:55.141 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:55.141 00:39:55.141 Run status group 0 (all jobs): 00:39:55.141 READ: bw=114MiB/s (119MB/s), 36.5MiB/s-40.1MiB/s (38.3MB/s-42.0MB/s), io=574MiB (602MB), run=5004-5046msec 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.141 bdev_null0 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.141 00:21:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.141 [2024-12-10 00:21:29.015440] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.141 bdev_null1 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.141 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.142 bdev_null2 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:55.142 { 00:39:55.142 "params": { 00:39:55.142 "name": "Nvme$subsystem", 00:39:55.142 "trtype": "$TEST_TRANSPORT", 00:39:55.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:55.142 "adrfam": "ipv4", 
00:39:55.142 "trsvcid": "$NVMF_PORT", 00:39:55.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:55.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:55.142 "hdgst": ${hdgst:-false}, 00:39:55.142 "ddgst": ${ddgst:-false} 00:39:55.142 }, 00:39:55.142 "method": "bdev_nvme_attach_controller" 00:39:55.142 } 00:39:55.142 EOF 00:39:55.142 )") 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:55.142 { 00:39:55.142 "params": { 00:39:55.142 "name": "Nvme$subsystem", 00:39:55.142 "trtype": "$TEST_TRANSPORT", 00:39:55.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:55.142 "adrfam": "ipv4", 00:39:55.142 "trsvcid": "$NVMF_PORT", 00:39:55.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:55.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:55.142 "hdgst": ${hdgst:-false}, 00:39:55.142 "ddgst": ${ddgst:-false} 00:39:55.142 }, 00:39:55.142 "method": "bdev_nvme_attach_controller" 00:39:55.142 } 00:39:55.142 EOF 00:39:55.142 )") 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:55.142 00:21:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:55.142 { 00:39:55.142 "params": { 00:39:55.142 "name": "Nvme$subsystem", 00:39:55.142 "trtype": "$TEST_TRANSPORT", 00:39:55.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:55.142 "adrfam": "ipv4", 00:39:55.142 "trsvcid": "$NVMF_PORT", 00:39:55.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:55.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:55.142 "hdgst": ${hdgst:-false}, 00:39:55.142 "ddgst": ${ddgst:-false} 00:39:55.142 }, 00:39:55.142 "method": "bdev_nvme_attach_controller" 00:39:55.142 } 00:39:55.142 EOF 00:39:55.142 )") 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:55.142 "params": { 00:39:55.142 "name": "Nvme0", 00:39:55.142 "trtype": "tcp", 00:39:55.142 "traddr": "10.0.0.2", 00:39:55.142 "adrfam": "ipv4", 00:39:55.142 "trsvcid": "4420", 00:39:55.142 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:55.142 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:55.142 "hdgst": false, 00:39:55.142 "ddgst": false 00:39:55.142 }, 00:39:55.142 "method": "bdev_nvme_attach_controller" 00:39:55.142 },{ 00:39:55.142 "params": { 00:39:55.142 "name": "Nvme1", 00:39:55.142 "trtype": "tcp", 00:39:55.142 "traddr": "10.0.0.2", 00:39:55.142 "adrfam": "ipv4", 00:39:55.142 "trsvcid": "4420", 00:39:55.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:55.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:55.142 "hdgst": false, 00:39:55.142 "ddgst": false 00:39:55.142 }, 00:39:55.142 "method": "bdev_nvme_attach_controller" 00:39:55.142 },{ 00:39:55.142 "params": { 00:39:55.142 "name": "Nvme2", 00:39:55.142 "trtype": "tcp", 00:39:55.142 "traddr": "10.0.0.2", 00:39:55.142 "adrfam": "ipv4", 00:39:55.142 "trsvcid": "4420", 00:39:55.142 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:39:55.142 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:39:55.142 "hdgst": false, 00:39:55.142 "ddgst": false 00:39:55.142 }, 00:39:55.142 "method": "bdev_nvme_attach_controller" 00:39:55.142 }' 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 
00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:39:55.142 00:21:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:55.142 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:55.142 ... 00:39:55.142 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:55.142 ... 00:39:55.142 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:39:55.142 ... 00:39:55.142 fio-3.35 00:39:55.143 Starting 24 threads 00:40:07.342 00:40:07.342 filename0: (groupid=0, jobs=1): err= 0: pid=631263: Tue Dec 10 00:21:40 2024 00:40:07.342 read: IOPS=564, BW=2257KiB/s (2312kB/s)(22.1MiB/10008msec) 00:40:07.342 slat (nsec): min=7341, max=64545, avg=14048.71, stdev=6827.58 00:40:07.342 clat (usec): min=9837, max=35015, avg=28230.45, stdev=1492.68 00:40:07.342 lat (usec): min=9849, max=35073, avg=28244.49, stdev=1490.96 00:40:07.342 clat percentiles (usec): 00:40:07.342 | 1.00th=[19530], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:40:07.342 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:40:07.342 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28705], 95.00th=[28705], 00:40:07.342 | 99.00th=[29230], 99.50th=[31065], 99.90th=[34341], 99.95th=[34341], 00:40:07.342 | 99.99th=[34866] 00:40:07.342 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2256.84, stdev=76.45, samples=19 00:40:07.342 iops : min= 544, max= 608, avg=564.21, stdev=19.11, samples=19 00:40:07.342 lat (msec) : 10=0.18%, 20=0.96%, 50=98.87% 00:40:07.342 cpu : usr=98.43%, sys=1.22%, ctx=14, majf=0, minf=11 00:40:07.342 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:07.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.342 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.342 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.342 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.342 filename0: (groupid=0, jobs=1): err= 0: pid=631264: Tue Dec 10 00:21:40 2024 00:40:07.342 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10003msec) 00:40:07.342 slat (nsec): min=4619, max=59113, avg=25184.87, stdev=8363.22 00:40:07.342 clat (usec): min=13752, max=51467, avg=28259.22, stdev=1559.18 00:40:07.342 lat (usec): min=13764, max=51480, avg=28284.41, stdev=1558.84 00:40:07.342 clat percentiles (usec): 00:40:07.342 | 1.00th=[27919], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:40:07.342 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:40:07.342 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:40:07.342 | 99.00th=[29492], 99.50th=[33817], 99.90th=[51643], 99.95th=[51643], 00:40:07.342 | 99.99th=[51643] 00:40:07.342 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2236.84, stdev=77.78, samples=19 00:40:07.342 iops : min= 513, max= 576, avg=559.21, stdev=19.44, samples=19 00:40:07.342 lat (msec) : 20=0.28%, 50=99.43%, 100=0.28% 00:40:07.342 cpu : usr=98.56%, sys=1.09%, ctx=8, majf=0, minf=9 00:40:07.342 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:07.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.342 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.342 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.342 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.342 filename0: (groupid=0, jobs=1): err= 0: pid=631265: Tue Dec 10 00:21:40 2024 00:40:07.342 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10002msec) 00:40:07.342 slat (nsec): min=7505, max=73517, avg=27505.83, stdev=14023.97 00:40:07.342 clat (usec): min=13965, max=50878, avg=28205.85, stdev=1533.10 00:40:07.342 lat (usec): min=13982, max=50895, avg=28233.35, stdev=1533.82 00:40:07.342 clat percentiles (usec): 00:40:07.342 | 1.00th=[27919], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:40:07.342 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:40:07.342 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:40:07.342 | 99.00th=[29492], 99.50th=[33817], 99.90th=[50594], 99.95th=[51119], 00:40:07.342 | 99.99th=[51119] 00:40:07.342 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2236.84, stdev=77.78, samples=19 00:40:07.342 iops : min= 513, max= 576, avg=559.21, stdev=19.44, samples=19 00:40:07.342 lat (msec) : 20=0.28%, 50=99.43%, 100=0.28% 00:40:07.342 cpu : usr=98.63%, sys=0.97%, ctx=14, majf=0, minf=9 00:40:07.342 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:07.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.342 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.342 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.342 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.342 filename0: (groupid=0, jobs=1): err= 0: pid=631266: Tue Dec 10 00:21:40 2024 00:40:07.342 read: IOPS=564, BW=2257KiB/s (2312kB/s)(22.1MiB/10008msec) 00:40:07.342 slat (nsec): min=7127, max=53686, avg=19914.38, stdev=7404.97 00:40:07.342 clat (usec): min=9522, max=33988, avg=28188.06, stdev=1486.84 00:40:07.342 lat (usec): min=9541, max=34003, avg=28207.97, stdev=1486.57 00:40:07.342 clat percentiles (usec): 00:40:07.342 | 1.00th=[23725], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:40:07.342 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:40:07.342 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.342 | 99.00th=[29492], 99.50th=[30540], 99.90th=[33817], 99.95th=[33817], 00:40:07.342 | 99.99th=[33817] 00:40:07.342 bw ( KiB/s): min= 2176, max= 2436, per=4.17%, avg=2253.00, stdev=77.07, samples=20 00:40:07.342 iops : min= 544, max= 609, avg=563.25, stdev=19.27, samples=20 00:40:07.342 lat (msec) : 10=0.25%, 20=0.46%, 50=99.29% 00:40:07.342 cpu : usr=98.36%, sys=1.29%, ctx=14, majf=0, minf=9 00:40:07.342 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:07.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.342 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.342 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.342 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.342 filename0: (groupid=0, jobs=1): err= 0: pid=631267: Tue Dec 10 00:21:40 2024 00:40:07.342 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10003msec) 00:40:07.342 slat (nsec): min=4948, max=54978, 
avg=26208.46, stdev=7647.11 00:40:07.342 clat (usec): min=13712, max=51108, avg=28264.16, stdev=1554.97 00:40:07.342 lat (usec): min=13735, max=51122, avg=28290.37, stdev=1554.43 00:40:07.342 clat percentiles (usec): 00:40:07.342 | 1.00th=[27919], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:40:07.342 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:40:07.342 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:40:07.342 | 99.00th=[29492], 99.50th=[33817], 99.90th=[51119], 99.95th=[51119], 00:40:07.342 | 99.99th=[51119] 00:40:07.342 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2236.84, stdev=77.78, samples=19 00:40:07.342 iops : min= 513, max= 576, avg=559.21, stdev=19.44, samples=19 00:40:07.342 lat (msec) : 20=0.28%, 50=99.43%, 100=0.28% 00:40:07.342 cpu : usr=98.35%, sys=1.31%, ctx=16, majf=0, minf=9 00:40:07.342 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:07.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.342 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.342 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.342 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.342 filename0: (groupid=0, jobs=1): err= 0: pid=631268: Tue Dec 10 00:21:40 2024 00:40:07.342 read: IOPS=561, BW=2246KiB/s (2299kB/s)(21.9MiB/10004msec) 00:40:07.342 slat (nsec): min=4426, max=54186, avg=24614.04, stdev=8345.65 00:40:07.342 clat (usec): min=13724, max=44437, avg=28285.24, stdev=1555.36 00:40:07.342 lat (usec): min=13751, max=44450, avg=28309.86, stdev=1554.66 00:40:07.342 clat percentiles (usec): 00:40:07.342 | 1.00th=[21890], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:40:07.343 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:40:07.343 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.343 | 99.00th=[34866], 99.50th=[34866], 99.90th=[44303], 99.95th=[44303], 00:40:07.343 | 99.99th=[44303] 00:40:07.343 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2236.63, stdev=77.03, samples=19 00:40:07.343 iops : min= 512, max= 576, avg=559.16, stdev=19.26, samples=19 00:40:07.343 lat (msec) : 20=0.28%, 50=99.72% 00:40:07.343 cpu : usr=98.40%, sys=1.26%, ctx=14, majf=0, minf=9 00:40:07.343 IO depths : 1=5.4%, 2=11.6%, 4=24.5%, 8=51.4%, 16=7.1%, 32=0.0%, >=64=0.0% 00:40:07.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.343 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.343 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.343 filename0: (groupid=0, jobs=1): err= 0: pid=631269: Tue Dec 10 00:21:40 2024 00:40:07.343 read: IOPS=562, BW=2251KiB/s (2305kB/s)(22.0MiB/10010msec) 00:40:07.343 slat (nsec): min=6308, max=56589, avg=25178.07, stdev=7962.45 00:40:07.343 clat (usec): min=13603, max=34457, avg=28226.94, stdev=830.51 00:40:07.343 lat (usec): min=13610, max=34474, avg=28252.12, stdev=830.72 00:40:07.343 clat percentiles (usec): 00:40:07.343 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:40:07.343 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:40:07.343 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.343 | 99.00th=[29230], 99.50th=[30540], 99.90th=[33817], 99.95th=[33817], 00:40:07.343 | 99.99th=[34341] 00:40:07.343 bw ( 
KiB/s): min= 2158, max= 2304, per=4.15%, avg=2245.50, stdev=66.47, samples=20 00:40:07.343 iops : min= 539, max= 576, avg=561.35, stdev=16.65, samples=20 00:40:07.343 lat (msec) : 20=0.28%, 50=99.72% 00:40:07.343 cpu : usr=98.48%, sys=1.18%, ctx=14, majf=0, minf=9 00:40:07.343 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:07.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.343 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.343 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.343 filename0: (groupid=0, jobs=1): err= 0: pid=631270: Tue Dec 10 00:21:40 2024 00:40:07.343 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10001msec) 00:40:07.343 slat (nsec): min=5804, max=38232, avg=17613.99, stdev=5120.99 00:40:07.343 clat (usec): min=19093, max=36358, avg=28334.00, stdev=779.47 00:40:07.343 lat (usec): min=19110, max=36372, avg=28351.62, stdev=779.35 00:40:07.343 clat percentiles (usec): 00:40:07.343 | 1.00th=[27657], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:40:07.343 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:40:07.343 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.343 | 99.00th=[29492], 99.50th=[34341], 99.90th=[36439], 99.95th=[36439], 00:40:07.343 | 99.99th=[36439] 00:40:07.343 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2243.37, stdev=65.66, samples=19 00:40:07.343 iops : min= 544, max= 576, avg=560.84, stdev=16.42, samples=19 00:40:07.343 lat (msec) : 20=0.28%, 50=99.72% 00:40:07.343 cpu : usr=98.39%, sys=1.25%, ctx=13, majf=0, minf=9 00:40:07.343 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:07.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.343 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.343 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.343 filename1: (groupid=0, jobs=1): err= 0: pid=631271: Tue Dec 10 00:21:40 2024 00:40:07.343 read: IOPS=562, BW=2250KiB/s (2304kB/s)(22.0MiB/10011msec) 00:40:07.343 slat (nsec): min=6762, max=38719, avg=17365.61, stdev=5570.95 00:40:07.343 clat (usec): min=19066, max=34494, avg=28289.04, stdev=811.26 00:40:07.343 lat (usec): min=19082, max=34510, avg=28306.41, stdev=811.34 00:40:07.343 clat percentiles (usec): 00:40:07.343 | 1.00th=[26870], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:40:07.343 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:40:07.343 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.343 | 99.00th=[29230], 99.50th=[30540], 99.90th=[34341], 99.95th=[34341], 00:40:07.343 | 99.99th=[34341] 00:40:07.343 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2245.90, stdev=64.91, samples=20 00:40:07.343 iops : min= 544, max= 576, avg=561.45, stdev=16.21, samples=20 00:40:07.343 lat (msec) : 20=0.57%, 50=99.43% 00:40:07.343 cpu : usr=98.44%, sys=1.20%, ctx=14, majf=0, minf=9 00:40:07.343 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:07.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.343 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.343 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.343 
latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.343 filename1: (groupid=0, jobs=1): err= 0: pid=631272: Tue Dec 10 00:21:40 2024 00:40:07.343 read: IOPS=564, BW=2258KiB/s (2312kB/s)(22.1MiB/10004msec) 00:40:07.343 slat (nsec): min=7754, max=46557, avg=20069.91, stdev=6000.55 00:40:07.343 clat (usec): min=9494, max=34004, avg=28171.92, stdev=1582.14 00:40:07.343 lat (usec): min=9513, max=34034, avg=28191.99, stdev=1582.23 00:40:07.343 clat percentiles (usec): 00:40:07.343 | 1.00th=[19006], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:40:07.343 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:40:07.343 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.343 | 99.00th=[29492], 99.50th=[30278], 99.90th=[33817], 99.95th=[33817], 00:40:07.343 | 99.99th=[33817] 00:40:07.343 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2256.84, stdev=76.45, samples=19 00:40:07.343 iops : min= 544, max= 608, avg=564.21, stdev=19.11, samples=19 00:40:07.343 lat (msec) : 10=0.28%, 20=0.85%, 50=98.87% 00:40:07.343 cpu : usr=98.33%, sys=1.32%, ctx=17, majf=0, minf=9 00:40:07.343 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:07.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.343 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.343 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.343 filename1: (groupid=0, jobs=1): err= 0: pid=631273: Tue Dec 10 00:21:40 2024 00:40:07.343 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10001msec) 00:40:07.343 slat (nsec): min=5268, max=37135, avg=17255.89, stdev=5192.32 00:40:07.343 clat (usec): min=19192, max=36284, avg=28330.44, stdev=771.90 00:40:07.343 lat (usec): min=19208, max=36302, avg=28347.69, stdev=771.82 00:40:07.343 clat percentiles (usec): 00:40:07.343 | 1.00th=[27657], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:40:07.343 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:40:07.343 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.343 | 99.00th=[29492], 99.50th=[34341], 99.90th=[36439], 99.95th=[36439], 00:40:07.343 | 99.99th=[36439] 00:40:07.343 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2243.37, stdev=65.66, samples=19 00:40:07.343 iops : min= 544, max= 576, avg=560.84, stdev=16.42, samples=19 00:40:07.343 lat (msec) : 20=0.28%, 50=99.72% 00:40:07.343 cpu : usr=98.67%, sys=0.98%, ctx=14, majf=0, minf=9 00:40:07.343 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:07.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.343 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.343 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.343 filename1: (groupid=0, jobs=1): err= 0: pid=631274: Tue Dec 10 00:21:40 2024 00:40:07.343 read: IOPS=564, BW=2257KiB/s (2312kB/s)(22.1MiB/10008msec) 00:40:07.343 slat (nsec): min=8237, max=56418, avg=21984.15, stdev=7469.61 00:40:07.343 clat (usec): min=9523, max=33982, avg=28170.68, stdev=1507.78 00:40:07.343 lat (usec): min=9541, max=33997, avg=28192.67, stdev=1507.79 00:40:07.343 clat percentiles (usec): 00:40:07.343 | 1.00th=[20317], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:40:07.343 | 30.00th=[28181], 
40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:40:07.343 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.343 | 99.00th=[29492], 99.50th=[30540], 99.90th=[33817], 99.95th=[33817], 00:40:07.343 | 99.99th=[33817] 00:40:07.343 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2252.80, stdev=76.58, samples=20 00:40:07.343 iops : min= 544, max= 608, avg=563.20, stdev=19.14, samples=20 00:40:07.343 lat (msec) : 10=0.25%, 20=0.60%, 50=99.15% 00:40:07.343 cpu : usr=98.44%, sys=1.20%, ctx=12, majf=0, minf=9 00:40:07.343 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:07.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.343 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.343 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.343 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.343 filename1: (groupid=0, jobs=1): err= 0: pid=631275: Tue Dec 10 00:21:40 2024 00:40:07.343 read: IOPS=562, BW=2248KiB/s (2302kB/s)(22.0MiB/10016msec) 00:40:07.343 slat (nsec): min=7261, max=78597, avg=19925.56, stdev=8521.15 00:40:07.343 clat (usec): min=18596, max=33587, avg=28273.59, stdev=698.77 00:40:07.343 lat (usec): min=18611, max=33622, avg=28293.51, stdev=699.91 00:40:07.343 clat percentiles (usec): 00:40:07.343 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:40:07.343 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:40:07.343 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:40:07.343 | 99.00th=[28967], 99.50th=[31327], 99.90th=[33162], 99.95th=[33424], 00:40:07.343 | 99.99th=[33817] 00:40:07.343 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2246.40, stdev=65.33, samples=20 00:40:07.343 iops : min= 544, max= 576, avg=561.60, stdev=16.33, samples=20 00:40:07.343 lat (msec) : 20=0.25%, 50=99.75% 00:40:07.343 cpu : usr=98.64%, sys=0.99%, ctx=9, majf=0, minf=9 00:40:07.344 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:07.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 issued rwts: total=5630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.344 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.344 filename1: (groupid=0, jobs=1): err= 0: pid=631276: Tue Dec 10 00:21:40 2024 00:40:07.344 read: IOPS=571, BW=2287KiB/s (2342kB/s)(22.4MiB/10019msec) 00:40:07.344 slat (nsec): min=7606, max=81796, avg=14644.08, stdev=7605.90 00:40:07.344 clat (usec): min=2392, max=34704, avg=27863.02, stdev=3272.31 00:40:07.344 lat (usec): min=2411, max=34714, avg=27877.66, stdev=3271.37 00:40:07.344 clat percentiles (usec): 00:40:07.344 | 1.00th=[ 5211], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:40:07.344 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:40:07.344 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.344 | 99.00th=[29230], 99.50th=[31589], 99.90th=[33424], 99.95th=[33424], 00:40:07.344 | 99.99th=[34866] 00:40:07.344 bw ( KiB/s): min= 2176, max= 2944, per=4.23%, avg=2284.80, stdev=167.54, samples=20 00:40:07.344 iops : min= 544, max= 736, avg=571.20, stdev=41.88, samples=20 00:40:07.344 lat (msec) : 4=0.84%, 10=0.84%, 20=0.84%, 50=97.49% 00:40:07.344 cpu : usr=98.34%, sys=1.29%, ctx=50, majf=0, minf=10 00:40:07.344 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:07.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.344 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.344 filename1: (groupid=0, jobs=1): err= 0: pid=631277: Tue Dec 10 00:21:40 2024 00:40:07.344 read: IOPS=568, BW=2274KiB/s (2329kB/s)(22.2MiB/10019msec) 00:40:07.344 slat (nsec): min=7447, max=51833, avg=16751.87, stdev=6713.37 00:40:07.344 clat (usec): min=3314, max=34002, avg=28002.77, stdev=2652.45 00:40:07.344 lat (usec): min=3338, max=34016, avg=28019.52, stdev=2651.78 00:40:07.344 clat percentiles (usec): 00:40:07.344 | 1.00th=[ 9765], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:40:07.344 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:40:07.344 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.344 | 99.00th=[29492], 99.50th=[30540], 99.90th=[33817], 99.95th=[33817], 00:40:07.344 | 99.99th=[33817] 00:40:07.344 bw ( KiB/s): min= 2176, max= 2688, per=4.20%, avg=2272.00, stdev=116.54, samples=20 00:40:07.344 iops : min= 544, max= 672, avg=568.00, stdev=29.13, samples=20 00:40:07.344 lat (msec) : 4=0.44%, 10=0.68%, 20=0.81%, 50=98.07% 00:40:07.344 cpu : usr=98.39%, sys=1.24%, ctx=14, majf=0, minf=9 00:40:07.344 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:40:07.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.344 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.344 filename1: (groupid=0, jobs=1): err= 0: pid=631278: Tue Dec 10 00:21:40 2024 00:40:07.344 read: IOPS=571, BW=2287KiB/s (2342kB/s)(22.4MiB/10019msec) 00:40:07.344 slat (nsec): min=7507, max=78651, avg=19202.90, stdev=9470.07 00:40:07.344 clat (usec): min=2494, max=33596, avg=27821.28, stdev=3268.13 00:40:07.344 lat (usec): min=2519, max=33611, avg=27840.48, stdev=3268.09 00:40:07.344 clat percentiles (usec): 00:40:07.344 | 1.00th=[ 4883], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:40:07.344 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:40:07.344 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.344 | 99.00th=[28967], 99.50th=[31327], 99.90th=[33424], 99.95th=[33424], 00:40:07.344 | 99.99th=[33817] 00:40:07.344 bw ( KiB/s): min= 2176, max= 2944, per=4.23%, avg=2284.80, stdev=167.54, samples=20 00:40:07.344 iops : min= 544, max= 736, avg=571.20, stdev=41.88, samples=20 00:40:07.344 lat (msec) : 4=0.84%, 10=0.84%, 20=0.84%, 50=97.49% 00:40:07.344 cpu : usr=98.23%, sys=1.41%, ctx=14, majf=0, minf=9 00:40:07.344 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:07.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.344 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.344 filename2: (groupid=0, jobs=1): err= 0: pid=631279: Tue Dec 10 00:21:40 2024 00:40:07.344 read: IOPS=563, BW=2256KiB/s (2310kB/s)(22.1MiB/10012msec) 00:40:07.344 slat (nsec): min=5226, max=66674, 
avg=19940.90, stdev=7506.00 00:40:07.344 clat (usec): min=13900, max=46099, avg=28200.08, stdev=1717.96 00:40:07.344 lat (usec): min=13923, max=46111, avg=28220.02, stdev=1718.52 00:40:07.344 clat percentiles (usec): 00:40:07.344 | 1.00th=[20579], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:40:07.344 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:40:07.344 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.344 | 99.00th=[33817], 99.50th=[35914], 99.90th=[45876], 99.95th=[45876], 00:40:07.344 | 99.99th=[45876] 00:40:07.344 bw ( KiB/s): min= 2176, max= 2448, per=4.16%, avg=2249.26, stdev=78.98, samples=19 00:40:07.344 iops : min= 544, max= 612, avg=562.32, stdev=19.75, samples=19 00:40:07.344 lat (msec) : 20=0.60%, 50=99.40% 00:40:07.344 cpu : usr=98.32%, sys=1.34%, ctx=14, majf=0, minf=9 00:40:07.344 IO depths : 1=5.9%, 2=11.8%, 4=24.1%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:40:07.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 issued rwts: total=5646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.344 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.344 filename2: (groupid=0, jobs=1): err= 0: pid=631280: Tue Dec 10 00:21:40 2024 00:40:07.344 read: IOPS=564, BW=2259KiB/s (2313kB/s)(22.1MiB/10002msec) 00:40:07.344 slat (nsec): min=7108, max=78649, avg=19596.13, stdev=8708.84 00:40:07.344 clat (usec): min=9678, max=35024, avg=28158.01, stdev=1638.05 00:40:07.344 lat (usec): min=9686, max=35039, avg=28177.60, stdev=1639.32 00:40:07.344 clat percentiles (usec): 00:40:07.344 | 1.00th=[17695], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:40:07.344 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:40:07.344 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.344 | 99.00th=[29230], 99.50th=[31589], 99.90th=[34866], 99.95th=[34866], 00:40:07.344 | 99.99th=[34866] 00:40:07.344 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2256.84, stdev=76.45, samples=19 00:40:07.344 iops : min= 544, max= 608, avg=564.21, stdev=19.11, samples=19 00:40:07.344 lat (msec) : 10=0.25%, 20=0.85%, 50=98.90% 00:40:07.344 cpu : usr=98.58%, sys=1.04%, ctx=14, majf=0, minf=9 00:40:07.344 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:07.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.344 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.344 filename2: (groupid=0, jobs=1): err= 0: pid=631281: Tue Dec 10 00:21:40 2024 00:40:07.344 read: IOPS=562, BW=2251KiB/s (2305kB/s)(22.0MiB/10016msec) 00:40:07.344 slat (nsec): min=7089, max=78721, avg=19923.47, stdev=8716.71 00:40:07.344 clat (usec): min=18433, max=37307, avg=28247.35, stdev=1102.66 00:40:07.344 lat (usec): min=18440, max=37314, avg=28267.27, stdev=1103.33 00:40:07.344 clat percentiles (usec): 00:40:07.344 | 1.00th=[21890], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:40:07.344 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:40:07.344 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.344 | 99.00th=[31851], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:40:07.344 | 99.99th=[37487] 00:40:07.344 bw ( 
KiB/s): min= 2176, max= 2304, per=4.16%, avg=2248.80, stdev=63.47, samples=20 00:40:07.344 iops : min= 544, max= 576, avg=562.20, stdev=15.87, samples=20 00:40:07.344 lat (msec) : 20=0.48%, 50=99.52% 00:40:07.344 cpu : usr=98.37%, sys=1.29%, ctx=14, majf=0, minf=9 00:40:07.344 IO depths : 1=5.9%, 2=12.1%, 4=24.6%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:40:07.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 issued rwts: total=5636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.344 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.344 filename2: (groupid=0, jobs=1): err= 0: pid=631282: Tue Dec 10 00:21:40 2024 00:40:07.344 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10003msec) 00:40:07.344 slat (nsec): min=5582, max=47381, avg=23327.50, stdev=6185.26 00:40:07.344 clat (usec): min=13755, max=51002, avg=28284.08, stdev=1537.72 00:40:07.344 lat (usec): min=13776, max=51018, avg=28307.41, stdev=1537.34 00:40:07.344 clat percentiles (usec): 00:40:07.344 | 1.00th=[27919], 5.00th=[27919], 10.00th=[28181], 20.00th=[28181], 00:40:07.344 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:40:07.344 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:40:07.344 | 99.00th=[29492], 99.50th=[33817], 99.90th=[51119], 99.95th=[51119], 00:40:07.344 | 99.99th=[51119] 00:40:07.344 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2236.84, stdev=77.78, samples=19 00:40:07.344 iops : min= 513, max= 576, avg=559.21, stdev=19.44, samples=19 00:40:07.344 lat (msec) : 20=0.28%, 50=99.43%, 100=0.28% 00:40:07.344 cpu : usr=98.55%, sys=1.11%, ctx=12, majf=0, minf=9 00:40:07.344 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:40:07.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.344 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.344 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.344 filename2: (groupid=0, jobs=1): err= 0: pid=631283: Tue Dec 10 00:21:40 2024 00:40:07.344 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10003msec) 00:40:07.344 slat (nsec): min=4488, max=54394, avg=26439.49, stdev=7679.46 00:40:07.344 clat (usec): min=13673, max=56004, avg=28265.64, stdev=1563.29 00:40:07.344 lat (usec): min=13687, max=56018, avg=28292.08, stdev=1562.70 00:40:07.344 clat percentiles (usec): 00:40:07.344 | 1.00th=[27919], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:40:07.344 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:40:07.345 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:40:07.345 | 99.00th=[29492], 99.50th=[33817], 99.90th=[50594], 99.95th=[50594], 00:40:07.345 | 99.99th=[55837] 00:40:07.345 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2236.84, stdev=77.78, samples=19 00:40:07.345 iops : min= 513, max= 576, avg=559.21, stdev=19.44, samples=19 00:40:07.345 lat (msec) : 20=0.28%, 50=99.43%, 100=0.28% 00:40:07.345 cpu : usr=98.35%, sys=1.30%, ctx=14, majf=0, minf=9 00:40:07.345 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:07.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.345 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.345 issued rwts: total=5616,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:40:07.345 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.345 filename2: (groupid=0, jobs=1): err= 0: pid=631284: Tue Dec 10 00:21:40 2024 00:40:07.345 read: IOPS=561, BW=2246KiB/s (2299kB/s)(21.9MiB/10004msec) 00:40:07.345 slat (nsec): min=4775, max=55592, avg=25160.07, stdev=8378.26 00:40:07.345 clat (usec): min=11826, max=51222, avg=28289.75, stdev=1740.98 00:40:07.345 lat (usec): min=11834, max=51236, avg=28314.91, stdev=1740.65 00:40:07.345 clat percentiles (usec): 00:40:07.345 | 1.00th=[22414], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:40:07.345 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:40:07.345 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.345 | 99.00th=[33817], 99.50th=[35390], 99.90th=[51119], 99.95th=[51119], 00:40:07.345 | 99.99th=[51119] 00:40:07.345 bw ( KiB/s): min= 2052, max= 2432, per=4.16%, avg=2246.60, stdev=86.29, samples=20 00:40:07.345 iops : min= 513, max= 608, avg=561.65, stdev=21.57, samples=20 00:40:07.345 lat (msec) : 20=0.32%, 50=99.39%, 100=0.28% 00:40:07.345 cpu : usr=98.24%, sys=1.41%, ctx=16, majf=0, minf=9 00:40:07.345 IO depths : 1=3.6%, 2=9.7%, 4=24.5%, 8=53.2%, 16=8.9%, 32=0.0%, >=64=0.0% 00:40:07.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.345 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.345 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.345 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.345 filename2: (groupid=0, jobs=1): err= 0: pid=631285: Tue Dec 10 00:21:40 2024 00:40:07.345 read: IOPS=560, BW=2244KiB/s (2297kB/s)(21.9MiB/10018msec) 00:40:07.345 slat (nsec): min=7744, max=79015, avg=18938.56, stdev=9765.49 00:40:07.345 clat (usec): min=17383, max=41865, avg=28318.15, stdev=705.94 00:40:07.345 lat (usec): min=17392, max=41907, avg=28337.09, stdev=706.53 00:40:07.345 clat percentiles (usec): 00:40:07.345 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28181], 20.00th=[28181], 00:40:07.345 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:40:07.345 | 70.00th=[28443], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:40:07.345 | 99.00th=[29230], 99.50th=[32637], 99.90th=[33424], 99.95th=[34341], 00:40:07.345 | 99.99th=[41681] 00:40:07.345 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2246.40, stdev=65.33, samples=20 00:40:07.345 iops : min= 544, max= 576, avg=561.60, stdev=16.33, samples=20 00:40:07.345 lat (msec) : 20=0.05%, 50=99.95% 00:40:07.345 cpu : usr=98.34%, sys=1.32%, ctx=10, majf=0, minf=9 00:40:07.345 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:40:07.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.345 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.345 issued rwts: total=5619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.345 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.345 filename2: (groupid=0, jobs=1): err= 0: pid=631286: Tue Dec 10 00:21:40 2024 00:40:07.345 read: IOPS=561, BW=2246KiB/s (2300kB/s)(21.9MiB/10003msec) 00:40:07.345 slat (nsec): min=7129, max=95977, avg=27195.23, stdev=15128.45 00:40:07.345 clat (usec): min=6417, max=50921, avg=28237.15, stdev=1605.11 00:40:07.345 lat (usec): min=6425, max=50935, avg=28264.35, stdev=1605.53 00:40:07.345 clat percentiles (usec): 00:40:07.345 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 
00:40:07.345 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:40:07.345 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28443], 00:40:07.345 | 99.00th=[29492], 99.50th=[33817], 99.90th=[51119], 99.95th=[51119], 00:40:07.345 | 99.99th=[51119] 00:40:07.345 bw ( KiB/s): min= 2052, max= 2304, per=4.14%, avg=2236.84, stdev=73.84, samples=19 00:40:07.345 iops : min= 513, max= 576, avg=559.21, stdev=18.46, samples=19 00:40:07.345 lat (msec) : 10=0.04%, 20=0.28%, 50=99.39%, 100=0.28% 00:40:07.345 cpu : usr=98.59%, sys=1.02%, ctx=13, majf=0, minf=9 00:40:07.345 IO depths : 1=3.7%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:40:07.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.345 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:07.345 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:07.345 latency : target=0, window=0, percentile=100.00%, depth=16 00:40:07.345 00:40:07.345 Run status group 0 (all jobs): 00:40:07.345 READ: bw=52.8MiB/s (55.3MB/s), 2244KiB/s-2287KiB/s (2297kB/s-2342kB/s), io=529MiB (554MB), run=10001-10019msec 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:07.345 00:21:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:07.345 bdev_null0 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:07.345 [2024-12-10 00:21:40.992507] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.345 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:40:07.346 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:40:07.346 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:40:07.346 00:21:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:40:07.346 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.346 00:21:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:07.346 bdev_null1 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:07.346 00:21:41 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:07.346 { 00:40:07.346 "params": { 00:40:07.346 "name": "Nvme$subsystem", 00:40:07.346 "trtype": "$TEST_TRANSPORT", 00:40:07.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.346 "adrfam": "ipv4", 00:40:07.346 "trsvcid": "$NVMF_PORT", 00:40:07.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.346 "hdgst": ${hdgst:-false}, 00:40:07.346 "ddgst": ${ddgst:-false} 00:40:07.346 }, 00:40:07.346 "method": "bdev_nvme_attach_controller" 00:40:07.346 } 00:40:07.346 EOF 00:40:07.346 )") 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:07.346 { 00:40:07.346 "params": { 00:40:07.346 "name": "Nvme$subsystem", 00:40:07.346 "trtype": "$TEST_TRANSPORT", 00:40:07.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.346 "adrfam": "ipv4", 00:40:07.346 "trsvcid": "$NVMF_PORT", 00:40:07.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.346 "hdgst": ${hdgst:-false}, 00:40:07.346 "ddgst": ${ddgst:-false} 00:40:07.346 }, 00:40:07.346 "method": "bdev_nvme_attach_controller" 00:40:07.346 } 00:40:07.346 EOF 00:40:07.346 )") 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:07.346 "params": { 00:40:07.346 "name": "Nvme0", 00:40:07.346 "trtype": "tcp", 00:40:07.346 "traddr": "10.0.0.2", 00:40:07.346 "adrfam": "ipv4", 00:40:07.346 "trsvcid": "4420", 00:40:07.346 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:07.346 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:07.346 "hdgst": false, 00:40:07.346 "ddgst": false 00:40:07.346 }, 00:40:07.346 "method": "bdev_nvme_attach_controller" 00:40:07.346 },{ 00:40:07.346 "params": { 00:40:07.346 "name": "Nvme1", 00:40:07.346 "trtype": "tcp", 00:40:07.346 "traddr": "10.0.0.2", 00:40:07.346 "adrfam": "ipv4", 00:40:07.346 "trsvcid": "4420", 00:40:07.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:07.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:07.346 "hdgst": false, 00:40:07.346 "ddgst": false 00:40:07.346 }, 00:40:07.346 "method": "bdev_nvme_attach_controller" 00:40:07.346 }' 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:40:07.346 00:21:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:07.346 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:07.346 ... 00:40:07.346 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:40:07.346 ... 
00:40:07.346 fio-3.35 00:40:07.346 Starting 4 threads 00:40:12.609 00:40:12.609 filename0: (groupid=0, jobs=1): err= 0: pid=633228: Tue Dec 10 00:21:47 2024 00:40:12.609 read: IOPS=2645, BW=20.7MiB/s (21.7MB/s)(103MiB/5002msec) 00:40:12.609 slat (nsec): min=6244, max=42046, avg=8881.31, stdev=2779.34 00:40:12.609 clat (usec): min=791, max=5452, avg=3000.88, stdev=347.63 00:40:12.609 lat (usec): min=803, max=5463, avg=3009.76, stdev=347.63 00:40:12.609 clat percentiles (usec): 00:40:12.609 | 1.00th=[ 2024], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2769], 00:40:12.609 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3097], 00:40:12.609 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3326], 95.00th=[ 3458], 00:40:12.609 | 99.00th=[ 4113], 99.50th=[ 4293], 99.90th=[ 4752], 99.95th=[ 4752], 00:40:12.609 | 99.99th=[ 4817] 00:40:12.609 bw ( KiB/s): min=20688, max=21856, per=25.71%, avg=21176.89, stdev=411.91, samples=9 00:40:12.609 iops : min= 2586, max= 2732, avg=2647.11, stdev=51.49, samples=9 00:40:12.609 lat (usec) : 1000=0.01% 00:40:12.609 lat (msec) : 2=0.88%, 4=97.89%, 10=1.22% 00:40:12.609 cpu : usr=95.50%, sys=4.18%, ctx=6, majf=0, minf=9 00:40:12.609 IO depths : 1=0.2%, 2=2.4%, 4=66.4%, 8=31.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:12.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:12.609 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:12.609 issued rwts: total=13231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:12.609 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:12.609 filename0: (groupid=0, jobs=1): err= 0: pid=633229: Tue Dec 10 00:21:47 2024 00:40:12.609 read: IOPS=2576, BW=20.1MiB/s (21.1MB/s)(101MiB/5002msec) 00:40:12.609 slat (usec): min=6, max=169, avg= 8.75, stdev= 3.24 00:40:12.609 clat (usec): min=1543, max=5751, avg=3079.58, stdev=373.64 00:40:12.609 lat (usec): min=1553, max=5762, avg=3088.33, stdev=373.58 00:40:12.609 clat percentiles (usec): 00:40:12.609 | 1.00th=[ 2147], 5.00th=[ 2474], 10.00th=[ 2671], 20.00th=[ 2933], 00:40:12.609 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3064], 60.00th=[ 3097], 00:40:12.609 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3392], 95.00th=[ 3654], 00:40:12.609 | 99.00th=[ 4555], 99.50th=[ 4817], 99.90th=[ 5407], 99.95th=[ 5473], 00:40:12.609 | 99.99th=[ 5735] 00:40:12.609 bw ( KiB/s): min=19584, max=21568, per=25.04%, avg=20624.00, stdev=687.53, samples=9 00:40:12.609 iops : min= 2448, max= 2696, avg=2578.00, stdev=85.94, samples=9 00:40:12.609 lat (msec) : 2=0.50%, 4=96.87%, 10=2.62% 00:40:12.609 cpu : usr=96.02%, sys=3.64%, ctx=7, majf=0, minf=9 00:40:12.609 IO depths : 1=0.1%, 2=1.7%, 4=71.3%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:12.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:12.609 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:12.609 issued rwts: total=12887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:12.609 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:12.609 filename1: (groupid=0, jobs=1): err= 0: pid=633230: Tue Dec 10 00:21:47 2024 00:40:12.609 read: IOPS=2531, BW=19.8MiB/s (20.7MB/s)(98.9MiB/5001msec) 00:40:12.609 slat (nsec): min=6278, max=48323, avg=8714.03, stdev=2941.87 00:40:12.609 clat (usec): min=780, max=6314, avg=3134.90, stdev=385.12 00:40:12.609 lat (usec): min=795, max=6340, avg=3143.61, stdev=385.05 00:40:12.609 clat percentiles (usec): 00:40:12.609 | 1.00th=[ 2278], 5.00th=[ 2671], 10.00th=[ 2835], 20.00th=[ 2999], 00:40:12.609 | 
30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3064], 60.00th=[ 3097], 00:40:12.609 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3425], 95.00th=[ 3752], 00:40:12.609 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 5342], 99.95th=[ 5997], 00:40:12.609 | 99.99th=[ 5997] 00:40:12.609 bw ( KiB/s): min=19520, max=20672, per=24.60%, avg=20262.22, stdev=353.67, samples=9 00:40:12.609 iops : min= 2440, max= 2584, avg=2532.78, stdev=44.21, samples=9 00:40:12.609 lat (usec) : 1000=0.02% 00:40:12.609 lat (msec) : 2=0.36%, 4=95.73%, 10=3.89% 00:40:12.609 cpu : usr=95.90%, sys=3.76%, ctx=7, majf=0, minf=9 00:40:12.609 IO depths : 1=0.1%, 2=1.3%, 4=70.6%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:12.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:12.609 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:12.609 issued rwts: total=12661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:12.609 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:12.609 filename1: (groupid=0, jobs=1): err= 0: pid=633231: Tue Dec 10 00:21:47 2024 00:40:12.609 read: IOPS=2544, BW=19.9MiB/s (20.8MB/s)(99.4MiB/5003msec) 00:40:12.609 slat (nsec): min=6278, max=41381, avg=8617.65, stdev=2851.15 00:40:12.609 clat (usec): min=1031, max=5734, avg=3118.59, stdev=336.08 00:40:12.609 lat (usec): min=1037, max=5740, avg=3127.21, stdev=336.00 00:40:12.609 clat percentiles (usec): 00:40:12.609 | 1.00th=[ 2278], 5.00th=[ 2671], 10.00th=[ 2868], 20.00th=[ 2999], 00:40:12.609 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3064], 60.00th=[ 3097], 00:40:12.609 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3392], 95.00th=[ 3654], 00:40:12.609 | 99.00th=[ 4555], 99.50th=[ 4817], 99.90th=[ 5407], 99.95th=[ 5538], 00:40:12.609 | 99.99th=[ 5669] 00:40:12.609 bw ( KiB/s): min=20032, max=20640, per=24.72%, avg=20361.78, stdev=251.77, samples=9 00:40:12.609 iops : min= 2504, max= 2580, avg=2545.22, stdev=31.47, samples=9 00:40:12.609 lat (msec) : 2=0.29%, 4=97.34%, 10=2.36% 00:40:12.609 cpu : usr=96.14%, sys=3.52%, ctx=6, majf=0, minf=9 00:40:12.609 IO depths : 1=0.1%, 2=1.1%, 4=72.3%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:12.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:12.609 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:12.609 issued rwts: total=12729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:12.609 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:12.609 00:40:12.609 Run status group 0 (all jobs): 00:40:12.609 READ: bw=80.4MiB/s (84.3MB/s), 19.8MiB/s-20.7MiB/s (20.7MB/s-21.7MB/s), io=402MiB (422MB), run=5001-5003msec 00:40:12.867 00:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:40:12.867 00:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:40:12.867 00:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:12.867 00:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:12.867 00:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:40:12.867 00:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:12.867 00:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.867 00:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:12.867 00:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:40:12.867 00:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:12.867 00:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.867 00:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:12.867 00:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.868 00:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:40:12.868 00:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:40:12.868 00:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:40:12.868 00:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:12.868 00:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.868 00:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:12.868 00:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.868 00:21:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:40:12.868 00:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.868 00:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:12.868 00:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.868 00:40:12.868 real 0m24.731s 00:40:12.868 user 4m52.211s 00:40:12.868 sys 0m5.343s 00:40:12.868 00:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:12.868 00:21:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:40:12.868 ************************************ 00:40:12.868 END TEST fio_dif_rand_params 00:40:12.868 ************************************ 00:40:12.868 00:21:47 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:40:12.868 00:21:47 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:12.868 00:21:47 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:12.868 00:21:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:12.868 ************************************ 00:40:12.868 START TEST fio_dif_digest 00:40:12.868 ************************************ 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:40:12.868 00:21:47 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:12.868 bdev_null0 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:12.868 [2024-12-10 00:21:47.702473] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:12.868 { 00:40:12.868 "params": { 00:40:12.868 "name": "Nvme$subsystem", 00:40:12.868 "trtype": "$TEST_TRANSPORT", 00:40:12.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:12.868 "adrfam": "ipv4", 00:40:12.868 
"trsvcid": "$NVMF_PORT", 00:40:12.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:12.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:12.868 "hdgst": ${hdgst:-false}, 00:40:12.868 "ddgst": ${ddgst:-false} 00:40:12.868 }, 00:40:12.868 "method": "bdev_nvme_attach_controller" 00:40:12.868 } 00:40:12.868 EOF 00:40:12.868 )") 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:12.868 "params": { 00:40:12.868 "name": "Nvme0", 00:40:12.868 "trtype": "tcp", 00:40:12.868 "traddr": "10.0.0.2", 00:40:12.868 "adrfam": "ipv4", 00:40:12.868 "trsvcid": "4420", 00:40:12.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:12.868 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:12.868 "hdgst": true, 00:40:12.868 "ddgst": true 00:40:12.868 }, 00:40:12.868 "method": "bdev_nvme_attach_controller" 00:40:12.868 }' 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:40:12.868 00:21:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:13.434 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:40:13.434 ... 
00:40:13.434 fio-3.35 00:40:13.434 Starting 3 threads 00:40:25.626 00:40:25.626 filename0: (groupid=0, jobs=1): err= 0: pid=634291: Tue Dec 10 00:21:58 2024 00:40:25.626 read: IOPS=270, BW=33.8MiB/s (35.4MB/s)(340MiB/10047msec) 00:40:25.626 slat (nsec): min=6702, max=26838, avg=11867.25, stdev=1821.40 00:40:25.626 clat (usec): min=8015, max=51273, avg=11062.93, stdev=1302.69 00:40:25.626 lat (usec): min=8027, max=51286, avg=11074.80, stdev=1302.62 00:40:25.626 clat percentiles (usec): 00:40:25.626 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:40:25.626 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:40:25.626 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12387], 00:40:25.626 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14091], 99.95th=[47973], 00:40:25.626 | 99.99th=[51119] 00:40:25.626 bw ( KiB/s): min=33792, max=35328, per=33.96%, avg=34752.00, stdev=562.56, samples=20 00:40:25.626 iops : min= 264, max= 276, avg=271.50, stdev= 4.39, samples=20 00:40:25.626 lat (msec) : 10=8.28%, 20=91.65%, 50=0.04%, 100=0.04% 00:40:25.626 cpu : usr=94.68%, sys=5.02%, ctx=17, majf=0, minf=23 00:40:25.626 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:25.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.626 issued rwts: total=2717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.626 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:25.626 filename0: (groupid=0, jobs=1): err= 0: pid=634292: Tue Dec 10 00:21:58 2024 00:40:25.626 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(333MiB/10045msec) 00:40:25.626 slat (nsec): min=6623, max=30425, avg=11851.09, stdev=2100.02 00:40:25.626 clat (usec): min=8385, max=45743, avg=11257.98, stdev=990.65 00:40:25.626 lat (usec): min=8392, max=45758, avg=11269.83, stdev=990.69 00:40:25.626 clat percentiles (usec): 00:40:25.626 | 1.00th=[ 9634], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:40:25.626 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:40:25.626 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:40:25.626 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13829], 99.95th=[14615], 00:40:25.626 | 99.99th=[45876] 00:40:25.626 bw ( KiB/s): min=33280, max=35072, per=33.33%, avg=34112.00, stdev=556.39, samples=20 00:40:25.626 iops : min= 260, max= 274, avg=266.50, stdev= 4.35, samples=20 00:40:25.627 lat (msec) : 10=3.90%, 20=96.06%, 50=0.04% 00:40:25.627 cpu : usr=94.64%, sys=4.96%, ctx=23, majf=0, minf=10 00:40:25.627 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:25.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.627 issued rwts: total=2666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.627 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:25.627 filename0: (groupid=0, jobs=1): err= 0: pid=634293: Tue Dec 10 00:21:58 2024 00:40:25.627 read: IOPS=263, BW=33.0MiB/s (34.6MB/s)(331MiB/10045msec) 00:40:25.627 slat (nsec): min=6540, max=26104, avg=11446.49, stdev=2285.96 00:40:25.627 clat (usec): min=7225, max=48204, avg=11342.80, stdev=1245.58 00:40:25.627 lat (usec): min=7251, max=48212, avg=11354.25, stdev=1245.48 00:40:25.627 clat percentiles (usec): 00:40:25.627 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 
00:40:25.627 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:40:25.627 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:40:25.627 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14091], 99.95th=[46400], 00:40:25.627 | 99.99th=[47973] 00:40:25.627 bw ( KiB/s): min=33024, max=34560, per=33.12%, avg=33894.40, stdev=410.27, samples=20 00:40:25.627 iops : min= 258, max= 270, avg=264.80, stdev= 3.21, samples=20 00:40:25.627 lat (msec) : 10=3.81%, 20=96.11%, 50=0.08% 00:40:25.627 cpu : usr=94.66%, sys=5.00%, ctx=17, majf=0, minf=19 00:40:25.627 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:25.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:25.627 issued rwts: total=2650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:25.627 latency : target=0, window=0, percentile=100.00%, depth=3 00:40:25.627 00:40:25.627 Run status group 0 (all jobs): 00:40:25.627 READ: bw=99.9MiB/s (105MB/s), 33.0MiB/s-33.8MiB/s (34.6MB/s-35.4MB/s), io=1004MiB (1053MB), run=10045-10047msec 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:25.627 00:40:25.627 real 0m11.106s 00:40:25.627 user 0m35.344s 00:40:25.627 sys 0m1.810s 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:25.627 00:21:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:40:25.627 ************************************ 00:40:25.627 END TEST fio_dif_digest 00:40:25.627 ************************************ 00:40:25.627 00:21:58 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:40:25.627 00:21:58 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:40:25.627 00:21:58 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:25.627 00:21:58 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:40:25.627 00:21:58 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:25.627 00:21:58 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:40:25.627 00:21:58 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:25.627 00:21:58 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:25.627 rmmod nvme_tcp 00:40:25.627 rmmod nvme_fabrics 00:40:25.627 rmmod nvme_keyring 00:40:25.627 00:21:58 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:25.627 00:21:58 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:40:25.627 00:21:58 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:40:25.627 00:21:58 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 625381 ']' 00:40:25.627 00:21:58 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 625381 00:40:25.627 00:21:58 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 625381 ']' 00:40:25.627 00:21:58 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 625381 00:40:25.627 00:21:58 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:40:25.627 00:21:58 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:25.627 00:21:58 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 625381 00:40:25.627 00:21:58 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:25.627 00:21:58 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:25.627 00:21:58 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 625381' 00:40:25.627 killing process with pid 625381 00:40:25.627 00:21:58 nvmf_dif -- common/autotest_common.sh@973 -- # kill 625381 00:40:25.627 00:21:58 nvmf_dif -- common/autotest_common.sh@978 -- # wait 625381 00:40:25.627 00:21:59 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:40:25.627 00:21:59 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:40:27.002 Waiting for block devices as requested 00:40:27.002 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:40:27.002 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:27.260 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:27.260 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:27.260 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:27.519 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:27.519 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:27.519 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:27.519 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:27.777 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:27.777 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:27.777 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:28.037 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:28.037 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:28.037 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:28.296 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:28.296 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:28.296 00:22:03 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:28.296 00:22:03 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:28.296 00:22:03 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:40:28.296 00:22:03 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:40:28.296 00:22:03 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:28.296 00:22:03 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:40:28.296 00:22:03 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:28.296 00:22:03 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:28.296 00:22:03 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:28.296 00:22:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:28.296 00:22:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.828 00:22:05 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:30.828 
00:40:30.828 real 1m14.292s 00:40:30.828 user 7m8.680s 00:40:30.828 sys 0m21.000s 00:40:30.828 00:22:05 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:30.828 00:22:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:30.828 ************************************ 00:40:30.828 END TEST nvmf_dif 00:40:30.828 ************************************ 00:40:30.828 00:22:05 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:30.828 00:22:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:30.828 00:22:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:30.828 00:22:05 -- common/autotest_common.sh@10 -- # set +x 00:40:30.828 ************************************ 00:40:30.828 START TEST nvmf_abort_qd_sizes 00:40:30.828 ************************************ 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort_qd_sizes.sh 00:40:30.828 * Looking for test storage... 00:40:30.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:30.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.828 --rc genhtml_branch_coverage=1 00:40:30.828 --rc genhtml_function_coverage=1 00:40:30.828 --rc genhtml_legend=1 00:40:30.828 --rc geninfo_all_blocks=1 00:40:30.828 --rc geninfo_unexecuted_blocks=1 00:40:30.828 00:40:30.828 ' 00:40:30.828 00:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:30.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.828 --rc genhtml_branch_coverage=1 00:40:30.828 --rc genhtml_function_coverage=1 00:40:30.828 --rc genhtml_legend=1 00:40:30.828 --rc geninfo_all_blocks=1 00:40:30.828 --rc geninfo_unexecuted_blocks=1 00:40:30.828 00:40:30.828 ' 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:30.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.829 --rc genhtml_branch_coverage=1 00:40:30.829 --rc genhtml_function_coverage=1 00:40:30.829 --rc genhtml_legend=1 00:40:30.829 --rc geninfo_all_blocks=1 00:40:30.829 --rc geninfo_unexecuted_blocks=1 00:40:30.829 00:40:30.829 ' 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:30.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:30.829 --rc genhtml_branch_coverage=1 00:40:30.829 --rc genhtml_function_coverage=1 00:40:30.829 --rc genhtml_legend=1 00:40:30.829 --rc geninfo_all_blocks=1 00:40:30.829 --rc geninfo_unexecuted_blocks=1 00:40:30.829 00:40:30.829 ' 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:30.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:40:30.829 00:22:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:40:37.397 Found 0000:86:00.0 (0x8086 - 0x159b) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:40:37.397 Found 0000:86:00.1 (0x8086 - 0x159b) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:40:37.397 Found net devices under 0000:86:00.0: cvl_0_0 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:37.397 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:40:37.398 Found net devices under 0000:86:00.1: cvl_0_1 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:37.398 00:22:11 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:37.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:37.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:40:37.398 00:40:37.398 --- 10.0.0.2 ping statistics --- 00:40:37.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:37.398 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:37.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:37.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:40:37.398 00:40:37.398 --- 10.0.0.1 ping statistics --- 00:40:37.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:37.398 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:40:37.398 00:22:11 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:40:39.304 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:40:39.304 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:40:40.240 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=642295 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 642295 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 642295 ']' 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:40.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:40.498 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:40.498 [2024-12-10 00:22:15.297414] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:40:40.498 [2024-12-10 00:22:15.297466] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:40.498 [2024-12-10 00:22:15.377926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:40.498 [2024-12-10 00:22:15.420903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:40.498 [2024-12-10 00:22:15.420940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:40.498 [2024-12-10 00:22:15.420947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:40.498 [2024-12-10 00:22:15.420953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:40.498 [2024-12-10 00:22:15.420958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:40.498 [2024-12-10 00:22:15.422487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:40.498 [2024-12-10 00:22:15.422596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:40.498 [2024-12-10 00:22:15.422706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:40.498 [2024-12-10 00:22:15.422707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:40:40.756 
00:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:40.756 00:22:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:40.756 ************************************ 00:40:40.756 START TEST spdk_target_abort 00:40:40.756 ************************************ 00:40:40.756 00:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:40:40.756 00:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:40:40.756 00:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:40:40.756 00:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:40.756 00:22:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:44.034 spdk_targetn1 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:44.034 [2024-12-10 00:22:18.444103] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:44.034 [2024-12-10 00:22:18.508487] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:40:44.034 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:44.035 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:40:44.035 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:44.035 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:44.035 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:44.035 00:22:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:47.310 Initializing NVMe Controllers 00:40:47.310 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:47.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:47.310 Initialization complete. Launching workers. 00:40:47.310 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16550, failed: 0 00:40:47.310 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1369, failed to submit 15181 00:40:47.310 success 724, unsuccessful 645, failed 0 00:40:47.310 00:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:47.310 00:22:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:50.586 Initializing NVMe Controllers 00:40:50.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:50.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:50.586 Initialization complete. Launching workers. 00:40:50.586 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8552, failed: 0 00:40:50.586 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 7305 00:40:50.586 success 343, unsuccessful 904, failed 0 00:40:50.586 00:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:50.586 00:22:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:53.861 Initializing NVMe Controllers 00:40:53.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:40:53.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:40:53.861 Initialization complete. Launching workers. 
00:40:53.861 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38018, failed: 0 00:40:53.861 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2713, failed to submit 35305 00:40:53.861 success 596, unsuccessful 2117, failed 0 00:40:53.861 00:22:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:40:53.861 00:22:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:53.861 00:22:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:53.861 00:22:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:53.861 00:22:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:40:53.862 00:22:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:53.862 00:22:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:55.234 00:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:55.234 00:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 642295 00:40:55.234 00:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 642295 ']' 00:40:55.234 00:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 642295 00:40:55.234 00:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:40:55.234 00:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:55.234 00:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 642295 00:40:55.234 00:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:55.234 00:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:55.234 00:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 642295' 00:40:55.234 killing process with pid 642295 00:40:55.234 00:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 642295 00:40:55.234 00:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 642295 00:40:55.234 00:40:55.234 real 0m14.346s 00:40:55.234 user 0m54.698s 00:40:55.234 sys 0m2.611s 00:40:55.234 00:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:55.234 00:22:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:40:55.234 ************************************ 00:40:55.234 END TEST spdk_target_abort 00:40:55.234 ************************************ 00:40:55.234 00:22:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:40:55.234 00:22:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:55.234 00:22:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:55.234 00:22:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:55.234 ************************************ 00:40:55.234 START TEST kernel_target_abort 00:40:55.234 
************************************ 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:40:55.234 00:22:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:40:58.524 Waiting for block devices as requested 00:40:58.524 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:40:58.524 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:58.524 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:58.524 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:58.524 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:58.524 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:58.524 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:58.524 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:58.524 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:58.783 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:40:58.783 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:40:58.783 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:40:59.042 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:40:59.042 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:40:59.042 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:40:59.042 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:40:59.301 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py nvme0n1 00:40:59.301 No valid GPT data, bailing 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:40:59.301 00:22:34 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:40:59.301 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:40:59.560 00:40:59.560 Discovery Log Number of Records 2, Generation counter 2 00:40:59.560 =====Discovery Log Entry 0====== 00:40:59.560 trtype: tcp 00:40:59.560 adrfam: ipv4 00:40:59.560 subtype: current discovery subsystem 00:40:59.560 treq: not specified, sq flow control disable supported 00:40:59.560 portid: 1 00:40:59.560 trsvcid: 4420 00:40:59.560 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:40:59.560 traddr: 10.0.0.1 00:40:59.560 eflags: none 00:40:59.560 sectype: none 00:40:59.560 =====Discovery Log Entry 1====== 00:40:59.560 trtype: tcp 00:40:59.560 adrfam: ipv4 00:40:59.560 subtype: nvme subsystem 00:40:59.560 treq: not specified, sq flow control disable supported 00:40:59.560 portid: 1 00:40:59.560 trsvcid: 4420 00:40:59.560 subnqn: nqn.2016-06.io.spdk:testnqn 00:40:59.560 traddr: 10.0.0.1 00:40:59.560 eflags: none 00:40:59.560 sectype: none 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:59.560 00:22:34 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:40:59.560 00:22:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:02.842 Initializing NVMe Controllers 00:41:02.842 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:02.842 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:02.842 Initialization complete. Launching workers. 00:41:02.842 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92887, failed: 0 00:41:02.842 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 92887, failed to submit 0 00:41:02.842 success 0, unsuccessful 92887, failed 0 00:41:02.842 00:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:02.842 00:22:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:06.134 Initializing NVMe Controllers 00:41:06.134 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:06.134 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:06.134 Initialization complete. Launching workers. 
00:41:06.134 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145467, failed: 0 00:41:06.134 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36458, failed to submit 109009 00:41:06.134 success 0, unsuccessful 36458, failed 0 00:41:06.134 00:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:41:06.134 00:22:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:41:09.415 Initializing NVMe Controllers 00:41:09.415 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:41:09.415 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:41:09.415 Initialization complete. Launching workers. 00:41:09.415 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 137745, failed: 0 00:41:09.415 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34494, failed to submit 103251 00:41:09.415 success 0, unsuccessful 34494, failed 0 00:41:09.415 00:22:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:41:09.415 00:22:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:41:09.415 00:22:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:41:09.415 00:22:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:09.415 00:22:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:41:09.415 00:22:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:41:09.415 00:22:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:41:09.415 00:22:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:41:09.415 00:22:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:41:09.415 00:22:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:41:11.952 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:41:11.952 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:41:11.952 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:41:11.952 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:41:11.952 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:41:11.952 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:41:11.952 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:41:11.952 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:41:11.952 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:41:11.952 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:41:11.952 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:41:11.952 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:41:11.952 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:41:11.952 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:41:11.952 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:41:11.952 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:41:12.519 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:41:12.777 00:41:12.777 real 0m17.523s 00:41:12.777 user 0m9.143s 00:41:12.777 sys 0m5.032s 00:41:12.777 00:22:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:12.777 00:22:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:41:12.777 ************************************ 00:41:12.777 END TEST kernel_target_abort 00:41:12.777 ************************************ 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:12.777 rmmod nvme_tcp 00:41:12.777 rmmod nvme_fabrics 00:41:12.777 rmmod nvme_keyring 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 642295 ']' 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 642295 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 642295 ']' 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 642295 00:41:12.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (642295) - No such process 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 642295 is not found' 00:41:12.777 Process with pid 642295 is not found 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:41:12.777 00:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:41:16.063 Waiting for block devices as requested 00:41:16.063 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:41:16.063 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:16.063 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:16.063 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:16.063 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:16.063 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:16.063 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:16.063 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:16.322 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:16.322 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:41:16.322 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:41:16.580 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:41:16.580 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:41:16.580 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:41:16.580 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:41:16.838 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:41:16.838 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:41:16.838 00:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:16.838 00:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:16.838 00:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:41:16.838 00:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:41:16.838 00:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:16.838 00:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:41:16.838 00:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:16.838 00:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:16.838 00:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:16.839 00:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:16.839 00:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:19.372 00:22:53 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:19.372 00:41:19.372 real 0m48.513s 00:41:19.372 user 1m8.148s 00:41:19.372 sys 0m16.342s 00:41:19.372 00:22:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:19.372 00:22:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:19.372 ************************************ 00:41:19.372 END TEST nvmf_abort_qd_sizes 00:41:19.373 ************************************ 00:41:19.373 00:22:53 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/file.sh 00:41:19.373 00:22:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:19.373 00:22:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:19.373 00:22:53 -- common/autotest_common.sh@10 -- # set +x 00:41:19.373 ************************************ 00:41:19.373 START TEST keyring_file 00:41:19.373 ************************************ 00:41:19.373 00:22:53 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/file.sh 00:41:19.373 * Looking for test storage... 
00:41:19.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring 00:41:19.373 00:22:53 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:19.373 00:22:53 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:41:19.373 00:22:53 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:19.373 00:22:54 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@345 -- # : 1 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@353 -- # local d=1 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@355 -- # echo 1 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@353 -- # local d=2 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@355 -- # echo 2 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@368 -- # return 0 00:41:19.373 00:22:54 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:19.373 00:22:54 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:19.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.373 --rc genhtml_branch_coverage=1 00:41:19.373 --rc genhtml_function_coverage=1 00:41:19.373 --rc genhtml_legend=1 00:41:19.373 --rc geninfo_all_blocks=1 00:41:19.373 --rc geninfo_unexecuted_blocks=1 00:41:19.373 00:41:19.373 ' 00:41:19.373 00:22:54 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:19.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.373 --rc genhtml_branch_coverage=1 00:41:19.373 --rc genhtml_function_coverage=1 00:41:19.373 --rc genhtml_legend=1 00:41:19.373 --rc geninfo_all_blocks=1 
00:41:19.373 --rc geninfo_unexecuted_blocks=1 00:41:19.373 00:41:19.373 ' 00:41:19.373 00:22:54 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:19.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.373 --rc genhtml_branch_coverage=1 00:41:19.373 --rc genhtml_function_coverage=1 00:41:19.373 --rc genhtml_legend=1 00:41:19.373 --rc geninfo_all_blocks=1 00:41:19.373 --rc geninfo_unexecuted_blocks=1 00:41:19.373 00:41:19.373 ' 00:41:19.373 00:22:54 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:19.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:19.373 --rc genhtml_branch_coverage=1 00:41:19.373 --rc genhtml_function_coverage=1 00:41:19.373 --rc genhtml_legend=1 00:41:19.373 --rc geninfo_all_blocks=1 00:41:19.373 --rc geninfo_unexecuted_blocks=1 00:41:19.373 00:41:19.373 ' 00:41:19.373 00:22:54 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/common.sh 00:41:19.373 00:22:54 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:19.373 00:22:54 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:19.373 00:22:54 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.373 00:22:54 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.373 00:22:54 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.373 00:22:54 keyring_file -- paths/export.sh@5 -- # export PATH 00:41:19.373 00:22:54 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@51 -- # : 0 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:19.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:19.373 00:22:54 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:19.373 00:22:54 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:19.373 00:22:54 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:19.373 00:22:54 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:19.373 00:22:54 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:41:19.373 00:22:54 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:41:19.373 00:22:54 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:41:19.373 00:22:54 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:19.373 00:22:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
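The prep_key calls traced next turn each hex key into an NVMe/TCP TLS interchange-format key file, which is later handed to bdevperf over its RPC socket as shown further down. A standalone sketch of the same steps, assuming test/nvmf/common.sh can be sourced outside file.sh so that format_interchange_psk (a wrapper around the inline python encoder seen below) is available:

# reproduce prep_key key0 00112233445566778899aabbccddeeff 0 by hand (assumption: common.sh sources cleanly on its own)
source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh
key0path=$(mktemp)                                          # this run got /tmp/tmp.FvyS0VT4B9
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"
chmod 0600 "$key0path"                                      # 0600 is required; the suite later shows 0660 being rejected
# once bdevperf is listening on /var/tmp/bperf.sock, register the file and check its refcount
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
"$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'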
00:41:19.373 00:22:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:19.373 00:22:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:19.373 00:22:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:19.373 00:22:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:19.374 00:22:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FvyS0VT4B9 00:41:19.374 00:22:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:19.374 00:22:54 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:19.374 00:22:54 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:41:19.374 00:22:54 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:19.374 00:22:54 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:41:19.374 00:22:54 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:41:19.374 00:22:54 keyring_file -- nvmf/common.sh@733 -- # python - 00:41:19.374 00:22:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FvyS0VT4B9 00:41:19.374 00:22:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FvyS0VT4B9 00:41:19.374 00:22:54 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.FvyS0VT4B9 00:41:19.374 00:22:54 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:41:19.374 00:22:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:41:19.374 00:22:54 keyring_file -- keyring/common.sh@17 -- # name=key1 00:41:19.374 00:22:54 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:19.374 00:22:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:19.374 00:22:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:19.374 00:22:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5pKJSlPFNR 00:41:19.374 00:22:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:19.374 00:22:54 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:19.374 00:22:54 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:41:19.374 00:22:54 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:19.374 00:22:54 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:41:19.374 00:22:54 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:41:19.374 00:22:54 keyring_file -- nvmf/common.sh@733 -- # python - 00:41:19.374 00:22:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5pKJSlPFNR 00:41:19.374 00:22:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5pKJSlPFNR 00:41:19.374 00:22:54 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5pKJSlPFNR 00:41:19.374 00:22:54 keyring_file -- keyring/file.sh@30 -- # tgtpid=651070 00:41:19.374 00:22:54 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:41:19.374 00:22:54 keyring_file -- keyring/file.sh@32 -- # waitforlisten 651070 00:41:19.374 00:22:54 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 651070 ']' 00:41:19.374 00:22:54 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:19.374 00:22:54 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:19.374 00:22:54 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:19.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:19.374 00:22:54 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:19.374 00:22:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:19.374 [2024-12-10 00:22:54.223897] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:41:19.374 [2024-12-10 00:22:54.223948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651070 ] 00:41:19.374 [2024-12-10 00:22:54.301023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:19.632 [2024-12-10 00:22:54.342331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:41:20.199 00:22:55 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:20.199 [2024-12-10 00:22:55.050370] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:20.199 null0 00:41:20.199 [2024-12-10 00:22:55.082418] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:20.199 [2024-12-10 00:22:55.082719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.199 00:22:55 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:20.199 [2024-12-10 00:22:55.114495] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:41:20.199 request: 00:41:20.199 { 00:41:20.199 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:41:20.199 "secure_channel": false, 00:41:20.199 "listen_address": { 00:41:20.199 "trtype": "tcp", 00:41:20.199 "traddr": "127.0.0.1", 00:41:20.199 "trsvcid": "4420" 00:41:20.199 }, 00:41:20.199 "method": "nvmf_subsystem_add_listener", 00:41:20.199 "req_id": 1 00:41:20.199 } 00:41:20.199 Got JSON-RPC error response 00:41:20.199 response: 00:41:20.199 { 00:41:20.199 
"code": -32602, 00:41:20.199 "message": "Invalid parameters" 00:41:20.199 } 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:20.199 00:22:55 keyring_file -- keyring/file.sh@47 -- # bperfpid=651213 00:41:20.199 00:22:55 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:41:20.199 00:22:55 keyring_file -- keyring/file.sh@49 -- # waitforlisten 651213 /var/tmp/bperf.sock 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 651213 ']' 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:20.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:20.199 00:22:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:20.457 [2024-12-10 00:22:55.166798] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:41:20.457 [2024-12-10 00:22:55.166842] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651213 ] 00:41:20.457 [2024-12-10 00:22:55.244788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:20.457 [2024-12-10 00:22:55.286677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:20.457 00:22:55 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:20.457 00:22:55 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:41:20.457 00:22:55 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FvyS0VT4B9 00:41:20.457 00:22:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FvyS0VT4B9 00:41:20.715 00:22:55 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5pKJSlPFNR 00:41:20.715 00:22:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5pKJSlPFNR 00:41:20.973 00:22:55 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:41:20.973 00:22:55 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:41:20.973 00:22:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:20.973 00:22:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:20.973 00:22:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:41:21.231 00:22:55 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.FvyS0VT4B9 == \/\t\m\p\/\t\m\p\.\F\v\y\S\0\V\T\4\B\9 ]] 00:41:21.231 00:22:55 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:41:21.231 00:22:55 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:41:21.231 00:22:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:21.231 00:22:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:21.231 00:22:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:21.489 00:22:56 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.5pKJSlPFNR == \/\t\m\p\/\t\m\p\.\5\p\K\J\S\l\P\F\N\R ]] 00:41:21.489 00:22:56 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:41:21.489 00:22:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:21.489 00:22:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:21.489 00:22:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:21.489 00:22:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:21.489 00:22:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:21.489 00:22:56 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:41:21.489 00:22:56 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:41:21.489 00:22:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:21.489 00:22:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:21.489 00:22:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:21.489 00:22:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:21.489 00:22:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:21.746 00:22:56 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:41:21.746 00:22:56 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:21.746 00:22:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:22.004 [2024-12-10 00:22:56.758756] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:22.004 nvme0n1 00:41:22.004 00:22:56 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:41:22.004 00:22:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:22.004 00:22:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:22.004 00:22:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:22.004 00:22:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:22.004 00:22:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:22.263 00:22:57 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:41:22.263 00:22:57 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:41:22.263 00:22:57 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:22.263 00:22:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:22.263 00:22:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:22.263 00:22:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:22.263 00:22:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:22.521 00:22:57 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:41:22.521 00:22:57 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:22.521 Running I/O for 1 seconds... 00:41:23.455 18913.00 IOPS, 73.88 MiB/s 00:41:23.455 Latency(us) 00:41:23.455 [2024-12-09T23:22:58.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:23.455 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:41:23.455 nvme0n1 : 1.00 18957.67 74.05 0.00 0.00 6738.66 3034.60 10827.69 00:41:23.455 [2024-12-09T23:22:58.391Z] =================================================================================================================== 00:41:23.455 [2024-12-09T23:22:58.391Z] Total : 18957.67 74.05 0.00 0.00 6738.66 3034.60 10827.69 00:41:23.455 { 00:41:23.455 "results": [ 00:41:23.455 { 00:41:23.455 "job": "nvme0n1", 00:41:23.455 "core_mask": "0x2", 00:41:23.455 "workload": "randrw", 00:41:23.455 "percentage": 50, 00:41:23.455 "status": "finished", 00:41:23.455 "queue_depth": 128, 00:41:23.455 "io_size": 4096, 00:41:23.455 "runtime": 1.004501, 00:41:23.455 "iops": 18957.671520486292, 00:41:23.455 "mibps": 74.05340437689958, 00:41:23.455 "io_failed": 0, 00:41:23.455 "io_timeout": 0, 00:41:23.455 "avg_latency_us": 6738.657812867446, 00:41:23.455 "min_latency_us": 3034.601739130435, 00:41:23.455 "max_latency_us": 10827.686956521738 00:41:23.455 } 00:41:23.455 ], 00:41:23.455 "core_count": 1 00:41:23.455 } 00:41:23.455 00:22:58 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:23.455 00:22:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:23.713 00:22:58 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:41:23.713 00:22:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:23.713 00:22:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:23.713 00:22:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:23.713 00:22:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:23.713 00:22:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:23.971 00:22:58 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:41:23.971 00:22:58 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:41:23.971 00:22:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:23.971 00:22:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:23.971 00:22:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:23.971 00:22:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:23.971 00:22:58 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:24.229 00:22:58 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:41:24.229 00:22:58 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:24.229 00:22:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:41:24.229 00:22:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:24.229 00:22:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:41:24.229 00:22:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:24.229 00:22:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:41:24.229 00:22:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:24.229 00:22:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:24.229 00:22:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:41:24.229 [2024-12-10 00:22:59.144753] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:24.229 [2024-12-10 00:22:59.145390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1275e30 (107): Transport endpoint is not connected 00:41:24.229 [2024-12-10 00:22:59.146385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1275e30 (9): Bad file descriptor 00:41:24.229 [2024-12-10 00:22:59.147387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:41:24.229 [2024-12-10 00:22:59.147404] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:24.229 [2024-12-10 00:22:59.147412] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:41:24.229 [2024-12-10 00:22:59.147421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
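The failing attempt traced above is the suite's negative case: the NOT wrapper expects bdev_nvme_attach_controller with --psk key1 to be rejected, and the errno 107 / failed-state messages are that rejection (its JSON-RPC request and response follow). For comparison, the positive form of the same call, exactly as traced earlier with key0:

/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0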
00:41:24.229 request: 00:41:24.229 { 00:41:24.229 "name": "nvme0", 00:41:24.229 "trtype": "tcp", 00:41:24.229 "traddr": "127.0.0.1", 00:41:24.229 "adrfam": "ipv4", 00:41:24.229 "trsvcid": "4420", 00:41:24.229 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:24.229 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:24.229 "prchk_reftag": false, 00:41:24.229 "prchk_guard": false, 00:41:24.229 "hdgst": false, 00:41:24.229 "ddgst": false, 00:41:24.229 "psk": "key1", 00:41:24.229 "allow_unrecognized_csi": false, 00:41:24.229 "method": "bdev_nvme_attach_controller", 00:41:24.229 "req_id": 1 00:41:24.229 } 00:41:24.229 Got JSON-RPC error response 00:41:24.229 response: 00:41:24.229 { 00:41:24.229 "code": -5, 00:41:24.229 "message": "Input/output error" 00:41:24.229 } 00:41:24.487 00:22:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:41:24.487 00:22:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:24.487 00:22:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:24.487 00:22:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:24.487 00:22:59 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:41:24.487 00:22:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:24.487 00:22:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:24.487 00:22:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:24.487 00:22:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:24.487 00:22:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:24.487 00:22:59 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:41:24.487 00:22:59 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:41:24.487 00:22:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:24.487 00:22:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:24.487 00:22:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:24.487 00:22:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:24.487 00:22:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:24.746 00:22:59 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:41:24.746 00:22:59 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:41:24.746 00:22:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:25.004 00:22:59 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:41:25.004 00:22:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:41:25.261 00:22:59 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:41:25.261 00:22:59 keyring_file -- keyring/file.sh@78 -- # jq length 00:41:25.261 00:22:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:25.261 00:23:00 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:41:25.261 00:23:00 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.FvyS0VT4B9 00:41:25.261 00:23:00 keyring_file -- 
keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.FvyS0VT4B9 00:41:25.261 00:23:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:41:25.261 00:23:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.FvyS0VT4B9 00:41:25.261 00:23:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:41:25.261 00:23:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:25.261 00:23:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:41:25.261 00:23:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:25.261 00:23:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FvyS0VT4B9 00:41:25.261 00:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FvyS0VT4B9 00:41:25.519 [2024-12-10 00:23:00.357582] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.FvyS0VT4B9': 0100660 00:41:25.519 [2024-12-10 00:23:00.357610] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:41:25.519 request: 00:41:25.519 { 00:41:25.519 "name": "key0", 00:41:25.519 "path": "/tmp/tmp.FvyS0VT4B9", 00:41:25.519 "method": "keyring_file_add_key", 00:41:25.519 "req_id": 1 00:41:25.519 } 00:41:25.519 Got JSON-RPC error response 00:41:25.519 response: 00:41:25.519 { 00:41:25.519 "code": -1, 00:41:25.519 "message": "Operation not permitted" 00:41:25.519 } 00:41:25.519 00:23:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:41:25.519 00:23:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:25.519 00:23:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:25.519 00:23:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:25.519 00:23:00 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.FvyS0VT4B9 00:41:25.519 00:23:00 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FvyS0VT4B9 00:41:25.519 00:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FvyS0VT4B9 00:41:25.778 00:23:00 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.FvyS0VT4B9 00:41:25.778 00:23:00 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:41:25.778 00:23:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:25.778 00:23:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:25.778 00:23:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:25.778 00:23:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:25.778 00:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:26.036 00:23:00 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:41:26.036 00:23:00 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:26.036 00:23:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:41:26.036 00:23:00 keyring_file -- common/autotest_common.sh@654 -- # 
valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:26.036 00:23:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:41:26.036 00:23:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:26.036 00:23:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:41:26.036 00:23:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:26.036 00:23:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:26.036 00:23:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:26.294 [2024-12-10 00:23:00.991259] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.FvyS0VT4B9': No such file or directory 00:41:26.294 [2024-12-10 00:23:00.991285] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:41:26.294 [2024-12-10 00:23:00.991302] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:41:26.294 [2024-12-10 00:23:00.991309] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:41:26.294 [2024-12-10 00:23:00.991327] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:41:26.294 [2024-12-10 00:23:00.991334] bdev_nvme.c:6795:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:41:26.294 request: 00:41:26.294 { 00:41:26.294 "name": "nvme0", 00:41:26.294 "trtype": "tcp", 00:41:26.294 "traddr": "127.0.0.1", 00:41:26.294 "adrfam": "ipv4", 00:41:26.294 "trsvcid": "4420", 00:41:26.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:26.294 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:26.294 "prchk_reftag": false, 00:41:26.294 "prchk_guard": false, 00:41:26.294 "hdgst": false, 00:41:26.294 "ddgst": false, 00:41:26.294 "psk": "key0", 00:41:26.294 "allow_unrecognized_csi": false, 00:41:26.294 "method": "bdev_nvme_attach_controller", 00:41:26.294 "req_id": 1 00:41:26.294 } 00:41:26.294 Got JSON-RPC error response 00:41:26.294 response: 00:41:26.294 { 00:41:26.294 "code": -19, 00:41:26.294 "message": "No such device" 00:41:26.294 } 00:41:26.294 00:23:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:41:26.294 00:23:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:26.294 00:23:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:26.294 00:23:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:26.294 00:23:01 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:41:26.294 00:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:26.294 00:23:01 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:41:26.294 00:23:01 keyring_file -- 
keyring/common.sh@15 -- # local name key digest path 00:41:26.294 00:23:01 keyring_file -- keyring/common.sh@17 -- # name=key0 00:41:26.294 00:23:01 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:26.294 00:23:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:41:26.294 00:23:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:41:26.294 00:23:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Bk0hjDAPql 00:41:26.294 00:23:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:26.294 00:23:01 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:26.294 00:23:01 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:41:26.294 00:23:01 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:26.294 00:23:01 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:41:26.294 00:23:01 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:41:26.294 00:23:01 keyring_file -- nvmf/common.sh@733 -- # python - 00:41:26.553 00:23:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Bk0hjDAPql 00:41:26.553 00:23:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Bk0hjDAPql 00:41:26.553 00:23:01 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Bk0hjDAPql 00:41:26.553 00:23:01 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Bk0hjDAPql 00:41:26.553 00:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Bk0hjDAPql 00:41:26.553 00:23:01 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:26.553 00:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:26.811 nvme0n1 00:41:26.811 00:23:01 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:41:26.811 00:23:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:26.811 00:23:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:26.811 00:23:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:26.811 00:23:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:26.811 00:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:27.069 00:23:01 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:41:27.069 00:23:01 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:41:27.069 00:23:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:41:27.327 00:23:02 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:41:27.327 00:23:02 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:41:27.327 00:23:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:27.327 00:23:02 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:27.327 00:23:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:27.585 00:23:02 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:41:27.585 00:23:02 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:41:27.585 00:23:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:27.585 00:23:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:27.585 00:23:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:27.585 00:23:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:27.585 00:23:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:27.842 00:23:02 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:41:27.842 00:23:02 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:27.842 00:23:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:27.842 00:23:02 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:41:27.842 00:23:02 keyring_file -- keyring/file.sh@105 -- # jq length 00:41:27.842 00:23:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:28.100 00:23:02 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:41:28.100 00:23:02 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Bk0hjDAPql 00:41:28.100 00:23:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Bk0hjDAPql 00:41:28.359 00:23:03 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5pKJSlPFNR 00:41:28.359 00:23:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5pKJSlPFNR 00:41:28.617 00:23:03 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:28.617 00:23:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:41:28.875 nvme0n1 00:41:28.875 00:23:03 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:41:28.875 00:23:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:41:29.134 00:23:03 keyring_file -- keyring/file.sh@113 -- # config='{ 00:41:29.134 "subsystems": [ 00:41:29.134 { 00:41:29.134 "subsystem": "keyring", 00:41:29.134 "config": [ 00:41:29.134 { 00:41:29.134 "method": "keyring_file_add_key", 00:41:29.134 "params": { 00:41:29.134 "name": "key0", 00:41:29.134 "path": "/tmp/tmp.Bk0hjDAPql" 00:41:29.134 } 00:41:29.134 }, 00:41:29.134 { 00:41:29.134 "method": "keyring_file_add_key", 00:41:29.134 "params": { 00:41:29.134 
"name": "key1", 00:41:29.134 "path": "/tmp/tmp.5pKJSlPFNR" 00:41:29.134 } 00:41:29.134 } 00:41:29.134 ] 00:41:29.134 }, 00:41:29.134 { 00:41:29.134 "subsystem": "iobuf", 00:41:29.134 "config": [ 00:41:29.134 { 00:41:29.134 "method": "iobuf_set_options", 00:41:29.134 "params": { 00:41:29.134 "small_pool_count": 8192, 00:41:29.134 "large_pool_count": 1024, 00:41:29.134 "small_bufsize": 8192, 00:41:29.134 "large_bufsize": 135168, 00:41:29.134 "enable_numa": false 00:41:29.134 } 00:41:29.134 } 00:41:29.134 ] 00:41:29.134 }, 00:41:29.134 { 00:41:29.134 "subsystem": "sock", 00:41:29.134 "config": [ 00:41:29.134 { 00:41:29.134 "method": "sock_set_default_impl", 00:41:29.134 "params": { 00:41:29.134 "impl_name": "posix" 00:41:29.134 } 00:41:29.134 }, 00:41:29.134 { 00:41:29.134 "method": "sock_impl_set_options", 00:41:29.134 "params": { 00:41:29.134 "impl_name": "ssl", 00:41:29.134 "recv_buf_size": 4096, 00:41:29.134 "send_buf_size": 4096, 00:41:29.134 "enable_recv_pipe": true, 00:41:29.134 "enable_quickack": false, 00:41:29.134 "enable_placement_id": 0, 00:41:29.134 "enable_zerocopy_send_server": true, 00:41:29.134 "enable_zerocopy_send_client": false, 00:41:29.134 "zerocopy_threshold": 0, 00:41:29.134 "tls_version": 0, 00:41:29.134 "enable_ktls": false 00:41:29.134 } 00:41:29.134 }, 00:41:29.134 { 00:41:29.134 "method": "sock_impl_set_options", 00:41:29.134 "params": { 00:41:29.134 "impl_name": "posix", 00:41:29.134 "recv_buf_size": 2097152, 00:41:29.134 "send_buf_size": 2097152, 00:41:29.134 "enable_recv_pipe": true, 00:41:29.134 "enable_quickack": false, 00:41:29.134 "enable_placement_id": 0, 00:41:29.134 "enable_zerocopy_send_server": true, 00:41:29.134 "enable_zerocopy_send_client": false, 00:41:29.134 "zerocopy_threshold": 0, 00:41:29.134 "tls_version": 0, 00:41:29.134 "enable_ktls": false 00:41:29.134 } 00:41:29.134 } 00:41:29.134 ] 00:41:29.134 }, 00:41:29.134 { 00:41:29.134 "subsystem": "vmd", 00:41:29.134 "config": [] 00:41:29.134 }, 00:41:29.134 { 00:41:29.134 "subsystem": "accel", 00:41:29.134 "config": [ 00:41:29.134 { 00:41:29.134 "method": "accel_set_options", 00:41:29.134 "params": { 00:41:29.134 "small_cache_size": 128, 00:41:29.134 "large_cache_size": 16, 00:41:29.134 "task_count": 2048, 00:41:29.134 "sequence_count": 2048, 00:41:29.134 "buf_count": 2048 00:41:29.134 } 00:41:29.134 } 00:41:29.134 ] 00:41:29.134 }, 00:41:29.134 { 00:41:29.134 "subsystem": "bdev", 00:41:29.134 "config": [ 00:41:29.134 { 00:41:29.134 "method": "bdev_set_options", 00:41:29.134 "params": { 00:41:29.134 "bdev_io_pool_size": 65535, 00:41:29.134 "bdev_io_cache_size": 256, 00:41:29.134 "bdev_auto_examine": true, 00:41:29.134 "iobuf_small_cache_size": 128, 00:41:29.134 "iobuf_large_cache_size": 16 00:41:29.134 } 00:41:29.134 }, 00:41:29.134 { 00:41:29.134 "method": "bdev_raid_set_options", 00:41:29.134 "params": { 00:41:29.134 "process_window_size_kb": 1024, 00:41:29.134 "process_max_bandwidth_mb_sec": 0 00:41:29.134 } 00:41:29.134 }, 00:41:29.134 { 00:41:29.134 "method": "bdev_iscsi_set_options", 00:41:29.134 "params": { 00:41:29.134 "timeout_sec": 30 00:41:29.134 } 00:41:29.134 }, 00:41:29.134 { 00:41:29.134 "method": "bdev_nvme_set_options", 00:41:29.134 "params": { 00:41:29.134 "action_on_timeout": "none", 00:41:29.134 "timeout_us": 0, 00:41:29.134 "timeout_admin_us": 0, 00:41:29.134 "keep_alive_timeout_ms": 10000, 00:41:29.134 "arbitration_burst": 0, 00:41:29.134 "low_priority_weight": 0, 00:41:29.134 "medium_priority_weight": 0, 00:41:29.134 "high_priority_weight": 0, 00:41:29.134 
"nvme_adminq_poll_period_us": 10000, 00:41:29.135 "nvme_ioq_poll_period_us": 0, 00:41:29.135 "io_queue_requests": 512, 00:41:29.135 "delay_cmd_submit": true, 00:41:29.135 "transport_retry_count": 4, 00:41:29.135 "bdev_retry_count": 3, 00:41:29.135 "transport_ack_timeout": 0, 00:41:29.135 "ctrlr_loss_timeout_sec": 0, 00:41:29.135 "reconnect_delay_sec": 0, 00:41:29.135 "fast_io_fail_timeout_sec": 0, 00:41:29.135 "disable_auto_failback": false, 00:41:29.135 "generate_uuids": false, 00:41:29.135 "transport_tos": 0, 00:41:29.135 "nvme_error_stat": false, 00:41:29.135 "rdma_srq_size": 0, 00:41:29.135 "io_path_stat": false, 00:41:29.135 "allow_accel_sequence": false, 00:41:29.135 "rdma_max_cq_size": 0, 00:41:29.135 "rdma_cm_event_timeout_ms": 0, 00:41:29.135 "dhchap_digests": [ 00:41:29.135 "sha256", 00:41:29.135 "sha384", 00:41:29.135 "sha512" 00:41:29.135 ], 00:41:29.135 "dhchap_dhgroups": [ 00:41:29.135 "null", 00:41:29.135 "ffdhe2048", 00:41:29.135 "ffdhe3072", 00:41:29.135 "ffdhe4096", 00:41:29.135 "ffdhe6144", 00:41:29.135 "ffdhe8192" 00:41:29.135 ], 00:41:29.135 "rdma_umr_per_io": false 00:41:29.135 } 00:41:29.135 }, 00:41:29.135 { 00:41:29.135 "method": "bdev_nvme_attach_controller", 00:41:29.135 "params": { 00:41:29.135 "name": "nvme0", 00:41:29.135 "trtype": "TCP", 00:41:29.135 "adrfam": "IPv4", 00:41:29.135 "traddr": "127.0.0.1", 00:41:29.135 "trsvcid": "4420", 00:41:29.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:29.135 "prchk_reftag": false, 00:41:29.135 "prchk_guard": false, 00:41:29.135 "ctrlr_loss_timeout_sec": 0, 00:41:29.135 "reconnect_delay_sec": 0, 00:41:29.135 "fast_io_fail_timeout_sec": 0, 00:41:29.135 "psk": "key0", 00:41:29.135 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:29.135 "hdgst": false, 00:41:29.135 "ddgst": false, 00:41:29.135 "multipath": "multipath" 00:41:29.135 } 00:41:29.135 }, 00:41:29.135 { 00:41:29.135 "method": "bdev_nvme_set_hotplug", 00:41:29.135 "params": { 00:41:29.135 "period_us": 100000, 00:41:29.135 "enable": false 00:41:29.135 } 00:41:29.135 }, 00:41:29.135 { 00:41:29.135 "method": "bdev_wait_for_examine" 00:41:29.135 } 00:41:29.135 ] 00:41:29.135 }, 00:41:29.135 { 00:41:29.135 "subsystem": "nbd", 00:41:29.135 "config": [] 00:41:29.135 } 00:41:29.135 ] 00:41:29.135 }' 00:41:29.135 00:23:03 keyring_file -- keyring/file.sh@115 -- # killprocess 651213 00:41:29.135 00:23:03 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 651213 ']' 00:41:29.135 00:23:03 keyring_file -- common/autotest_common.sh@958 -- # kill -0 651213 00:41:29.135 00:23:03 keyring_file -- common/autotest_common.sh@959 -- # uname 00:41:29.135 00:23:03 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:29.135 00:23:03 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 651213 00:41:29.135 00:23:03 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:29.135 00:23:03 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:29.135 00:23:03 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 651213' 00:41:29.135 killing process with pid 651213 00:41:29.135 00:23:03 keyring_file -- common/autotest_common.sh@973 -- # kill 651213 00:41:29.135 Received shutdown signal, test time was about 1.000000 seconds 00:41:29.135 00:41:29.135 Latency(us) 00:41:29.135 [2024-12-09T23:23:04.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:29.135 [2024-12-09T23:23:04.071Z] 
=================================================================================================================== 00:41:29.135 [2024-12-09T23:23:04.071Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:29.135 00:23:03 keyring_file -- common/autotest_common.sh@978 -- # wait 651213 00:41:29.393 00:23:04 keyring_file -- keyring/file.sh@118 -- # bperfpid=652818 00:41:29.393 00:23:04 keyring_file -- keyring/file.sh@120 -- # waitforlisten 652818 /var/tmp/bperf.sock 00:41:29.393 00:23:04 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 652818 ']' 00:41:29.393 00:23:04 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:29.393 00:23:04 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:41:29.393 00:23:04 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:29.393 00:23:04 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:29.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:29.393 00:23:04 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:41:29.393 "subsystems": [ 00:41:29.393 { 00:41:29.393 "subsystem": "keyring", 00:41:29.393 "config": [ 00:41:29.394 { 00:41:29.394 "method": "keyring_file_add_key", 00:41:29.394 "params": { 00:41:29.394 "name": "key0", 00:41:29.394 "path": "/tmp/tmp.Bk0hjDAPql" 00:41:29.394 } 00:41:29.394 }, 00:41:29.394 { 00:41:29.394 "method": "keyring_file_add_key", 00:41:29.394 "params": { 00:41:29.394 "name": "key1", 00:41:29.394 "path": "/tmp/tmp.5pKJSlPFNR" 00:41:29.394 } 00:41:29.394 } 00:41:29.394 ] 00:41:29.394 }, 00:41:29.394 { 00:41:29.394 "subsystem": "iobuf", 00:41:29.394 "config": [ 00:41:29.394 { 00:41:29.394 "method": "iobuf_set_options", 00:41:29.394 "params": { 00:41:29.394 "small_pool_count": 8192, 00:41:29.394 "large_pool_count": 1024, 00:41:29.394 "small_bufsize": 8192, 00:41:29.394 "large_bufsize": 135168, 00:41:29.394 "enable_numa": false 00:41:29.394 } 00:41:29.394 } 00:41:29.394 ] 00:41:29.394 }, 00:41:29.394 { 00:41:29.394 "subsystem": "sock", 00:41:29.394 "config": [ 00:41:29.394 { 00:41:29.394 "method": "sock_set_default_impl", 00:41:29.394 "params": { 00:41:29.394 "impl_name": "posix" 00:41:29.394 } 00:41:29.394 }, 00:41:29.394 { 00:41:29.394 "method": "sock_impl_set_options", 00:41:29.394 "params": { 00:41:29.394 "impl_name": "ssl", 00:41:29.394 "recv_buf_size": 4096, 00:41:29.394 "send_buf_size": 4096, 00:41:29.394 "enable_recv_pipe": true, 00:41:29.394 "enable_quickack": false, 00:41:29.394 "enable_placement_id": 0, 00:41:29.394 "enable_zerocopy_send_server": true, 00:41:29.394 "enable_zerocopy_send_client": false, 00:41:29.394 "zerocopy_threshold": 0, 00:41:29.394 "tls_version": 0, 00:41:29.394 "enable_ktls": false 00:41:29.394 } 00:41:29.394 }, 00:41:29.394 { 00:41:29.394 "method": "sock_impl_set_options", 00:41:29.394 "params": { 00:41:29.394 "impl_name": "posix", 00:41:29.394 "recv_buf_size": 2097152, 00:41:29.394 "send_buf_size": 2097152, 00:41:29.394 "enable_recv_pipe": true, 00:41:29.394 "enable_quickack": false, 00:41:29.394 "enable_placement_id": 0, 00:41:29.394 "enable_zerocopy_send_server": true, 00:41:29.394 "enable_zerocopy_send_client": false, 00:41:29.394 "zerocopy_threshold": 0, 00:41:29.394 "tls_version": 0, 00:41:29.394 "enable_ktls": false 00:41:29.394 } 
00:41:29.394 } 00:41:29.394 ] 00:41:29.394 }, 00:41:29.394 { 00:41:29.394 "subsystem": "vmd", 00:41:29.394 "config": [] 00:41:29.394 }, 00:41:29.394 { 00:41:29.394 "subsystem": "accel", 00:41:29.394 "config": [ 00:41:29.394 { 00:41:29.394 "method": "accel_set_options", 00:41:29.394 "params": { 00:41:29.394 "small_cache_size": 128, 00:41:29.394 "large_cache_size": 16, 00:41:29.394 "task_count": 2048, 00:41:29.394 "sequence_count": 2048, 00:41:29.394 "buf_count": 2048 00:41:29.394 } 00:41:29.394 } 00:41:29.394 ] 00:41:29.394 }, 00:41:29.394 { 00:41:29.394 "subsystem": "bdev", 00:41:29.394 "config": [ 00:41:29.394 { 00:41:29.394 "method": "bdev_set_options", 00:41:29.394 "params": { 00:41:29.394 "bdev_io_pool_size": 65535, 00:41:29.394 "bdev_io_cache_size": 256, 00:41:29.394 "bdev_auto_examine": true, 00:41:29.394 "iobuf_small_cache_size": 128, 00:41:29.394 "iobuf_large_cache_size": 16 00:41:29.394 } 00:41:29.394 }, 00:41:29.394 { 00:41:29.394 "method": "bdev_raid_set_options", 00:41:29.394 "params": { 00:41:29.394 "process_window_size_kb": 1024, 00:41:29.394 "process_max_bandwidth_mb_sec": 0 00:41:29.394 } 00:41:29.394 }, 00:41:29.394 { 00:41:29.394 "method": "bdev_iscsi_set_options", 00:41:29.394 "params": { 00:41:29.394 "timeout_sec": 30 00:41:29.394 } 00:41:29.394 }, 00:41:29.394 { 00:41:29.394 "method": "bdev_nvme_set_options", 00:41:29.394 "params": { 00:41:29.394 "action_on_timeout": "none", 00:41:29.394 "timeout_us": 0, 00:41:29.394 "timeout_admin_us": 0, 00:41:29.394 "keep_alive_timeout_ms": 10000, 00:41:29.394 "arbitration_burst": 0, 00:41:29.394 "low_priority_weight": 0, 00:41:29.394 "medium_priority_weight": 0, 00:41:29.394 "high_priority_weight": 0, 00:41:29.394 "nvme_adminq_poll_period_us": 10000, 00:41:29.394 "nvme_ioq_poll_period_us": 0, 00:41:29.394 "io_queue_requests": 512, 00:41:29.394 "delay_cmd_submit": true, 00:41:29.394 "transport_retry_count": 4, 00:41:29.394 "bdev_retry_count": 3, 00:41:29.394 "transport_ack_timeout": 0, 00:41:29.394 "ctrlr_loss_timeout_sec": 0, 00:41:29.394 "reconnect_delay_sec": 0, 00:41:29.394 "fast_io_fail_timeout_sec": 0, 00:41:29.394 "disable_auto_failback": false, 00:41:29.394 "generate_uuids": false, 00:41:29.394 "transport_tos": 0, 00:41:29.394 "nvme_error_stat": false, 00:41:29.394 "rdma_srq_size": 0, 00:41:29.394 "io_path_stat": false, 00:41:29.394 "allow_accel_sequence": false, 00:41:29.394 "rdma_max_cq_size": 0, 00:41:29.394 "rdma_cm_event_timeout_ms": 0, 00:41:29.394 "dhchap_digests": [ 00:41:29.394 "sha256", 00:41:29.394 "sha384", 00:41:29.394 "sha512" 00:41:29.394 ], 00:41:29.394 "dhchap_dhgroups": [ 00:41:29.394 "null", 00:41:29.394 "ffdhe2048", 00:41:29.394 "ffdhe3072", 00:41:29.394 "ffdhe4096", 00:41:29.394 "ffdhe6144", 00:41:29.394 "ffdhe8192" 00:41:29.394 ], 00:41:29.394 "rdma_umr_per_io": false 00:41:29.394 } 00:41:29.394 }, 00:41:29.394 { 00:41:29.394 "method": "bdev_nvme_attach_controller", 00:41:29.394 "params": { 00:41:29.394 "name": "nvme0", 00:41:29.394 "trtype": "TCP", 00:41:29.394 "adrfam": "IPv4", 00:41:29.394 "traddr": "127.0.0.1", 00:41:29.394 "trsvcid": "4420", 00:41:29.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:29.394 "prchk_reftag": false, 00:41:29.394 "prchk_guard": false, 00:41:29.394 "ctrlr_loss_timeout_sec": 0, 00:41:29.394 "reconnect_delay_sec": 0, 00:41:29.394 "fast_io_fail_timeout_sec": 0, 00:41:29.394 "psk": "key0", 00:41:29.394 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:29.394 "hdgst": false, 00:41:29.394 "ddgst": false, 00:41:29.394 "multipath": "multipath" 00:41:29.394 } 00:41:29.394 }, 
00:41:29.394 { 00:41:29.394 "method": "bdev_nvme_set_hotplug", 00:41:29.394 "params": { 00:41:29.394 "period_us": 100000, 00:41:29.394 "enable": false 00:41:29.394 } 00:41:29.394 }, 00:41:29.394 { 00:41:29.394 "method": "bdev_wait_for_examine" 00:41:29.394 } 00:41:29.394 ] 00:41:29.394 }, 00:41:29.394 { 00:41:29.394 "subsystem": "nbd", 00:41:29.394 "config": [] 00:41:29.394 } 00:41:29.394 ] 00:41:29.394 }' 00:41:29.394 00:23:04 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:29.394 00:23:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:29.394 [2024-12-10 00:23:04.121615] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:41:29.394 [2024-12-10 00:23:04.121664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid652818 ] 00:41:29.394 [2024-12-10 00:23:04.197243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:29.394 [2024-12-10 00:23:04.238123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:29.652 [2024-12-10 00:23:04.399697] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:30.219 00:23:04 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:30.219 00:23:04 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:41:30.219 00:23:04 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:41:30.219 00:23:04 keyring_file -- keyring/file.sh@121 -- # jq length 00:41:30.219 00:23:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:30.477 00:23:05 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:41:30.477 00:23:05 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:41:30.477 00:23:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:41:30.477 00:23:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:30.477 00:23:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:30.477 00:23:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:30.477 00:23:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:41:30.477 00:23:05 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:41:30.477 00:23:05 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:41:30.477 00:23:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:41:30.477 00:23:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:41:30.477 00:23:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:41:30.477 00:23:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:30.477 00:23:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:30.735 00:23:05 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:41:30.735 00:23:05 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:41:30.735 00:23:05 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:41:30.735 00:23:05 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:41:30.994 00:23:05 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:41:30.994 00:23:05 keyring_file -- keyring/file.sh@1 -- # cleanup 00:41:30.994 00:23:05 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Bk0hjDAPql /tmp/tmp.5pKJSlPFNR 00:41:30.994 00:23:05 keyring_file -- keyring/file.sh@20 -- # killprocess 652818 00:41:30.994 00:23:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 652818 ']' 00:41:30.994 00:23:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 652818 00:41:30.994 00:23:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:41:30.994 00:23:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:30.994 00:23:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 652818 00:41:30.994 00:23:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:30.994 00:23:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:30.994 00:23:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 652818' 00:41:30.994 killing process with pid 652818 00:41:30.994 00:23:05 keyring_file -- common/autotest_common.sh@973 -- # kill 652818 00:41:30.994 Received shutdown signal, test time was about 1.000000 seconds 00:41:30.994 00:41:30.994 Latency(us) 00:41:30.994 [2024-12-09T23:23:05.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:30.994 [2024-12-09T23:23:05.930Z] =================================================================================================================== 00:41:30.994 [2024-12-09T23:23:05.930Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:41:30.994 00:23:05 keyring_file -- common/autotest_common.sh@978 -- # wait 652818 00:41:31.253 00:23:06 keyring_file -- keyring/file.sh@21 -- # killprocess 651070 00:41:31.253 00:23:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 651070 ']' 00:41:31.253 00:23:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 651070 00:41:31.253 00:23:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:41:31.253 00:23:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:31.253 00:23:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 651070 00:41:31.253 00:23:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:31.253 00:23:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:31.253 00:23:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 651070' 00:41:31.253 killing process with pid 651070 00:41:31.253 00:23:06 keyring_file -- common/autotest_common.sh@973 -- # kill 651070 00:41:31.253 00:23:06 keyring_file -- common/autotest_common.sh@978 -- # wait 651070 00:41:31.511 00:41:31.512 real 0m12.487s 00:41:31.512 user 0m30.522s 00:41:31.512 sys 0m2.698s 00:41:31.512 00:23:06 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:31.512 00:23:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:41:31.512 ************************************ 00:41:31.512 END TEST keyring_file 00:41:31.512 ************************************ 00:41:31.512 00:23:06 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:41:31.512 00:23:06 -- spdk/autotest.sh@294 -- # run_test keyring_linux 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/linux.sh 00:41:31.512 00:23:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:31.512 00:23:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:31.512 00:23:06 -- common/autotest_common.sh@10 -- # set +x 00:41:31.512 ************************************ 00:41:31.512 START TEST keyring_linux 00:41:31.512 ************************************ 00:41:31.512 00:23:06 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/linux.sh 00:41:31.512 Joined session keyring: 825972924 00:41:31.771 * Looking for test storage... 00:41:31.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring 00:41:31.771 00:23:06 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:31.771 00:23:06 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:31.771 00:23:06 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:41:31.771 00:23:06 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@345 -- # : 1 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:31.771 00:23:06 keyring_linux -- scripts/common.sh@368 -- # return 0 00:41:31.771 00:23:06 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:31.771 00:23:06 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:31.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.771 --rc genhtml_branch_coverage=1 00:41:31.771 --rc genhtml_function_coverage=1 00:41:31.771 --rc genhtml_legend=1 00:41:31.771 --rc geninfo_all_blocks=1 00:41:31.771 --rc geninfo_unexecuted_blocks=1 00:41:31.771 00:41:31.771 ' 00:41:31.772 00:23:06 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:31.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.772 --rc genhtml_branch_coverage=1 00:41:31.772 --rc genhtml_function_coverage=1 00:41:31.772 --rc genhtml_legend=1 00:41:31.772 --rc geninfo_all_blocks=1 00:41:31.772 --rc geninfo_unexecuted_blocks=1 00:41:31.772 00:41:31.772 ' 00:41:31.772 00:23:06 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:31.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.772 --rc genhtml_branch_coverage=1 00:41:31.772 --rc genhtml_function_coverage=1 00:41:31.772 --rc genhtml_legend=1 00:41:31.772 --rc geninfo_all_blocks=1 00:41:31.772 --rc geninfo_unexecuted_blocks=1 00:41:31.772 00:41:31.772 ' 00:41:31.772 00:23:06 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:31.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.772 --rc genhtml_branch_coverage=1 00:41:31.772 --rc genhtml_function_coverage=1 00:41:31.772 --rc genhtml_legend=1 00:41:31.772 --rc geninfo_all_blocks=1 00:41:31.772 --rc geninfo_unexecuted_blocks=1 00:41:31.772 00:41:31.772 ' 00:41:31.772 00:23:06 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/common.sh 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:41:31.772 00:23:06 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:41:31.772 00:23:06 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:31.772 00:23:06 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:31.772 00:23:06 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:31.772 00:23:06 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.772 00:23:06 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.772 00:23:06 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.772 00:23:06 keyring_linux -- paths/export.sh@5 -- # export PATH 00:41:31.772 00:23:06 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
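Before sourcing the NVMe-oF helpers, the keyring_linux wrapper above checks the installed lcov version with the lt/cmp_versions helpers from scripts/common.sh, which split each version string on '.', '-' and ':' and compare it component by component. Below is a minimal standalone sketch of that comparison idea; the function name version_lt and its handling of missing components are illustrative assumptions, not the SPDK implementation itself.

#!/usr/bin/env bash
# Sketch of a component-wise "less than" version check, in the spirit of the
# cmp_versions trace above. Names and edge-case handling are illustrative.
version_lt() {
    local -a v1 v2
    local i
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        # Missing components compare as 0, so "2" behaves like "2.0".
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # mirrors the "lt 1.15 2" call above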
00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:31.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:41:31.772 00:23:06 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:41:31.772 00:23:06 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:41:31.772 00:23:06 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:41:31.772 00:23:06 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:41:31.772 00:23:06 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:41:31.772 00:23:06 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@733 -- # python - 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:41:31.772 /tmp/:spdk-test:key0 00:41:31.772 00:23:06 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:41:31.772 00:23:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:41:31.772 
00:23:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:41:31.772 00:23:06 keyring_linux -- nvmf/common.sh@733 -- # python - 00:41:32.031 00:23:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:41:32.031 00:23:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:41:32.031 /tmp/:spdk-test:key1 00:41:32.031 00:23:06 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=653283 00:41:32.031 00:23:06 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 653283 00:41:32.031 00:23:06 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:41:32.031 00:23:06 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 653283 ']' 00:41:32.031 00:23:06 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:32.031 00:23:06 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:32.031 00:23:06 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:32.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:32.031 00:23:06 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:32.031 00:23:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:32.031 [2024-12-10 00:23:06.763221] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
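The prep_key calls above turn the raw hex strings 00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00 into TLS PSK interchange strings of the form NVMeTLSkey-1:00:<base64>: and write them to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 with mode 0600. A hedged sketch of that formatting step follows; based on the inline python call in the trace it assumes the base64 payload is the configured key bytes followed by their little-endian CRC32, with the hash identifier rendered as two hex digits. Treat it as an approximation of format_interchange_psk/format_key, not their verbatim source.

#!/usr/bin/env bash
# Approximate reimplementation of the PSK interchange formatting traced above.
# Assumption: payload = base64(key || CRC32(key), little-endian), hash id "00".
format_psk() {
    local key=$1 digest=$2
    python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))' "$key" "$digest"
}

psk=$(format_psk 00112233445566778899aabbccddeeff 0)
printf '%s' "$psk" > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0          # same permissions as in the trace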
00:41:32.031 [2024-12-10 00:23:06.763272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid653283 ] 00:41:32.031 [2024-12-10 00:23:06.835926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:32.031 [2024-12-10 00:23:06.876726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:32.967 00:23:07 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:32.967 00:23:07 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:41:32.967 00:23:07 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:41:32.967 00:23:07 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.967 00:23:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:32.967 [2024-12-10 00:23:07.580883] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:32.967 null0 00:41:32.967 [2024-12-10 00:23:07.612934] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:41:32.967 [2024-12-10 00:23:07.613291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:41:32.967 00:23:07 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.967 00:23:07 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:41:32.967 130379202 00:41:32.967 00:23:07 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:41:32.967 592362677 00:41:32.967 00:23:07 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=653400 00:41:32.967 00:23:07 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:41:32.967 00:23:07 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 653400 /var/tmp/bperf.sock 00:41:32.967 00:23:07 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 653400 ']' 00:41:32.967 00:23:07 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:41:32.967 00:23:07 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:32.967 00:23:07 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:41:32.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:41:32.967 00:23:07 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:32.967 00:23:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:32.967 [2024-12-10 00:23:07.686985] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
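With the target listening on 127.0.0.1:4420, the keyctl add user ... @s calls above load the two interchange PSKs into the kernel session keyring under the names :spdk-test:key0 and :spdk-test:key1; keyctl prints the serial numbers (130379202 and 592362677) that the later verification and cleanup steps search for, print and unlink. A small sketch of that round trip using standard keyutils commands follows; the payload is the interchange string produced in the previous step and the key names are taken from the trace.

#!/usr/bin/env bash
# Round-trip a TLS PSK through the kernel session keyring, as keyring_linux
# does above. The payload below is the key0 interchange string from the log.
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # prints the key's serial number

keyctl search @s user :spdk-test:key0             # resolves the name back to the serial
keyctl print "$sn"                                # dumps the PSK payload

keyctl unlink "$sn"                               # cleanup; "1 links removed" in the log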
00:41:32.967 [2024-12-10 00:23:07.687034] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid653400 ] 00:41:32.967 [2024-12-10 00:23:07.762392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:32.967 [2024-12-10 00:23:07.803682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:32.967 00:23:07 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:32.967 00:23:07 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:41:32.967 00:23:07 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:41:32.967 00:23:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:41:33.224 00:23:08 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:41:33.224 00:23:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:41:33.482 00:23:08 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:33.482 00:23:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:41:33.739 [2024-12-10 00:23:08.480767] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:41:33.739 nvme0n1 00:41:33.739 00:23:08 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:41:33.739 00:23:08 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:41:33.739 00:23:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:41:33.739 00:23:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:41:33.739 00:23:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:41:33.739 00:23:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:33.997 00:23:08 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:41:33.997 00:23:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:41:33.997 00:23:08 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:41:33.997 00:23:08 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:41:33.997 00:23:08 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:41:33.997 00:23:08 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:41:33.997 00:23:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:34.255 00:23:08 keyring_linux -- keyring/linux.sh@25 -- # sn=130379202 00:41:34.255 00:23:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:41:34.255 00:23:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:41:34.255 00:23:08 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 130379202 == \1\3\0\3\7\9\2\0\2 ]] 00:41:34.255 00:23:08 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 130379202 00:41:34.255 00:23:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:41:34.255 00:23:08 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:41:34.255 Running I/O for 1 seconds... 00:41:35.189 21082.00 IOPS, 82.35 MiB/s 00:41:35.189 Latency(us) 00:41:35.189 [2024-12-09T23:23:10.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:35.189 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:41:35.189 nvme0n1 : 1.01 21081.24 82.35 0.00 0.00 6051.59 1994.57 8263.23 00:41:35.189 [2024-12-09T23:23:10.125Z] =================================================================================================================== 00:41:35.189 [2024-12-09T23:23:10.125Z] Total : 21081.24 82.35 0.00 0.00 6051.59 1994.57 8263.23 00:41:35.189 { 00:41:35.189 "results": [ 00:41:35.189 { 00:41:35.189 "job": "nvme0n1", 00:41:35.189 "core_mask": "0x2", 00:41:35.189 "workload": "randread", 00:41:35.189 "status": "finished", 00:41:35.189 "queue_depth": 128, 00:41:35.189 "io_size": 4096, 00:41:35.189 "runtime": 1.006108, 00:41:35.189 "iops": 21081.235811662365, 00:41:35.189 "mibps": 82.34857738930612, 00:41:35.189 "io_failed": 0, 00:41:35.190 "io_timeout": 0, 00:41:35.190 "avg_latency_us": 6051.586891499088, 00:41:35.190 "min_latency_us": 1994.5739130434783, 00:41:35.190 "max_latency_us": 8263.234782608695 00:41:35.190 } 00:41:35.190 ], 00:41:35.190 "core_count": 1 00:41:35.190 } 00:41:35.190 00:23:10 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:41:35.190 00:23:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:41:35.448 00:23:10 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:41:35.448 00:23:10 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:41:35.448 00:23:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:41:35.448 00:23:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:41:35.448 00:23:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:41:35.448 00:23:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:41:35.706 00:23:10 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:41:35.706 00:23:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:41:35.706 00:23:10 keyring_linux -- keyring/linux.sh@23 -- # return 00:41:35.706 00:23:10 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:35.706 00:23:10 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:41:35.706 00:23:10 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:41:35.706 00:23:10 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:41:35.706 00:23:10 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:35.706 00:23:10 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:41:35.706 00:23:10 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:35.706 00:23:10 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:35.706 00:23:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:41:35.965 [2024-12-10 00:23:10.684447] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:41:35.965 [2024-12-10 00:23:10.685169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f0bc0 (107): Transport endpoint is not connected 00:41:35.965 [2024-12-10 00:23:10.686161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f0bc0 (9): Bad file descriptor 00:41:35.965 [2024-12-10 00:23:10.687163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:41:35.965 [2024-12-10 00:23:10.687174] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:41:35.965 [2024-12-10 00:23:10.687181] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:41:35.965 [2024-12-10 00:23:10.687190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
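After the attach with :spdk-test:key0 succeeds and the one-second randread run completes (about 21k IOPS above), the test detaches nvme0 and deliberately retries the attach with :spdk-test:key1. The listener is protected by key0, so the connection fails and the RPC returns the error dump that follows. Below is a hedged sketch of that negative check, driving the still-running bdevperf over its RPC socket; the rpc.py path, socket and attach arguments are copied from the trace, while a plain if/else stands in for the harness's NOT wrapper.

#!/usr/bin/env bash
# Negative check: attaching with the wrong keyring entry must fail.
# Assumes bdevperf is already listening on /var/tmp/bperf.sock.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

if "$rpc" -s "$sock" bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1; then
    echo "unexpected: attach with key1 succeeded" >&2
    exit 1
fi
echo "attach with key1 failed as expected"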
00:41:35.965 request: 00:41:35.965 { 00:41:35.965 "name": "nvme0", 00:41:35.965 "trtype": "tcp", 00:41:35.965 "traddr": "127.0.0.1", 00:41:35.965 "adrfam": "ipv4", 00:41:35.965 "trsvcid": "4420", 00:41:35.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:35.965 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:35.965 "prchk_reftag": false, 00:41:35.965 "prchk_guard": false, 00:41:35.965 "hdgst": false, 00:41:35.965 "ddgst": false, 00:41:35.965 "psk": ":spdk-test:key1", 00:41:35.965 "allow_unrecognized_csi": false, 00:41:35.965 "method": "bdev_nvme_attach_controller", 00:41:35.965 "req_id": 1 00:41:35.965 } 00:41:35.965 Got JSON-RPC error response 00:41:35.965 response: 00:41:35.965 { 00:41:35.965 "code": -5, 00:41:35.965 "message": "Input/output error" 00:41:35.965 } 00:41:35.965 00:23:10 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:41:35.965 00:23:10 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:35.965 00:23:10 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:35.965 00:23:10 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@33 -- # sn=130379202 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 130379202 00:41:35.965 1 links removed 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@33 -- # sn=592362677 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 592362677 00:41:35.965 1 links removed 00:41:35.965 00:23:10 keyring_linux -- keyring/linux.sh@41 -- # killprocess 653400 00:41:35.965 00:23:10 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 653400 ']' 00:41:35.965 00:23:10 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 653400 00:41:35.965 00:23:10 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:41:35.965 00:23:10 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:35.965 00:23:10 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 653400 00:41:35.965 00:23:10 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:35.965 00:23:10 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:35.965 00:23:10 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 653400' 00:41:35.965 killing process with pid 653400 00:41:35.965 00:23:10 keyring_linux -- common/autotest_common.sh@973 -- # kill 653400 00:41:35.965 Received shutdown signal, test time was about 1.000000 seconds 00:41:35.965 00:41:35.965 
Latency(us) 00:41:35.965 [2024-12-09T23:23:10.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:35.965 [2024-12-09T23:23:10.901Z] =================================================================================================================== 00:41:35.965 [2024-12-09T23:23:10.901Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:35.965 00:23:10 keyring_linux -- common/autotest_common.sh@978 -- # wait 653400 00:41:36.225 00:23:10 keyring_linux -- keyring/linux.sh@42 -- # killprocess 653283 00:41:36.225 00:23:10 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 653283 ']' 00:41:36.225 00:23:10 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 653283 00:41:36.225 00:23:10 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:41:36.225 00:23:10 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:36.225 00:23:10 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 653283 00:41:36.225 00:23:10 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:36.225 00:23:10 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:36.225 00:23:10 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 653283' 00:41:36.225 killing process with pid 653283 00:41:36.225 00:23:10 keyring_linux -- common/autotest_common.sh@973 -- # kill 653283 00:41:36.225 00:23:10 keyring_linux -- common/autotest_common.sh@978 -- # wait 653283 00:41:36.484 00:41:36.484 real 0m4.867s 00:41:36.484 user 0m8.913s 00:41:36.484 sys 0m1.478s 00:41:36.484 00:23:11 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:36.484 00:23:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:41:36.484 ************************************ 00:41:36.484 END TEST keyring_linux 00:41:36.484 ************************************ 00:41:36.484 00:23:11 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:41:36.484 00:23:11 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:41:36.484 00:23:11 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:41:36.484 00:23:11 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:41:36.484 00:23:11 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:41:36.484 00:23:11 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:41:36.484 00:23:11 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:41:36.484 00:23:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:41:36.484 00:23:11 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:41:36.484 00:23:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:41:36.484 00:23:11 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:41:36.484 00:23:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:41:36.484 00:23:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:41:36.484 00:23:11 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:41:36.484 00:23:11 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:41:36.484 00:23:11 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:41:36.484 00:23:11 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:41:36.484 00:23:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:36.484 00:23:11 -- common/autotest_common.sh@10 -- # set +x 00:41:36.484 00:23:11 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:41:36.484 00:23:11 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:41:36.484 00:23:11 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:41:36.484 00:23:11 -- common/autotest_common.sh@10 -- # set +x 00:41:41.756 INFO: APP EXITING 00:41:41.756 INFO: 
killing all VMs 00:41:41.756 INFO: killing vhost app 00:41:41.756 INFO: EXIT DONE 00:41:44.291 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:41:44.291 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:41:44.291 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:41:44.291 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:41:44.291 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:41:44.291 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:41:44.291 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:41:44.291 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:41:44.291 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:41:44.291 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:41:44.291 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:41:44.551 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:41:44.551 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:41:44.551 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:41:44.551 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:41:44.551 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:41:44.551 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:41:47.840 Cleaning 00:41:47.840 Removing: /var/run/dpdk/spdk0/config 00:41:47.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:41:47.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:41:47.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:41:47.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:41:47.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:41:47.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:41:47.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:41:47.840 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:41:47.840 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:41:47.840 Removing: /var/run/dpdk/spdk0/hugepage_info 00:41:47.840 Removing: /var/run/dpdk/spdk1/config 00:41:47.840 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:41:47.840 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:41:47.840 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:41:47.840 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:41:47.840 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:41:47.840 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:41:47.840 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:41:47.840 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:41:47.840 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:41:47.840 Removing: /var/run/dpdk/spdk1/hugepage_info 00:41:47.840 Removing: /var/run/dpdk/spdk2/config 00:41:47.840 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:41:47.840 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:41:47.840 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:41:47.840 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:41:47.840 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:41:47.840 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:41:47.841 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:41:47.841 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:41:47.841 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:41:47.841 Removing: /var/run/dpdk/spdk2/hugepage_info 00:41:47.841 Removing: /var/run/dpdk/spdk3/config 00:41:47.841 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:41:47.841 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:41:47.841 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:41:47.841 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:41:47.841 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:41:47.841 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:41:47.841 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:41:47.841 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:41:47.841 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:41:47.841 Removing: /var/run/dpdk/spdk3/hugepage_info 00:41:47.841 Removing: /var/run/dpdk/spdk4/config 00:41:47.841 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:41:47.841 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:41:47.841 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:41:47.841 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:41:47.841 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:41:47.841 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:41:47.841 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:41:47.841 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:41:47.841 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:41:47.841 Removing: /var/run/dpdk/spdk4/hugepage_info 00:41:47.841 Removing: /dev/shm/bdev_svc_trace.1 00:41:47.841 Removing: /dev/shm/nvmf_trace.0 00:41:47.841 Removing: /dev/shm/spdk_tgt_trace.pid173530 00:41:47.841 Removing: /var/run/dpdk/spdk0 00:41:47.841 Removing: /var/run/dpdk/spdk1 00:41:47.841 Removing: /var/run/dpdk/spdk2 00:41:47.841 Removing: /var/run/dpdk/spdk3 00:41:47.841 Removing: /var/run/dpdk/spdk4 00:41:47.841 Removing: /var/run/dpdk/spdk_pid171214 00:41:47.841 Removing: /var/run/dpdk/spdk_pid172390 00:41:47.841 Removing: /var/run/dpdk/spdk_pid173530 00:41:47.841 Removing: /var/run/dpdk/spdk_pid174102 00:41:47.841 Removing: /var/run/dpdk/spdk_pid175050 00:41:47.841 Removing: /var/run/dpdk/spdk_pid175282 00:41:47.841 Removing: /var/run/dpdk/spdk_pid176262 00:41:47.841 Removing: /var/run/dpdk/spdk_pid176268 00:41:47.841 Removing: /var/run/dpdk/spdk_pid176620 00:41:47.841 Removing: /var/run/dpdk/spdk_pid178144 00:41:47.841 Removing: /var/run/dpdk/spdk_pid179466 00:41:47.841 Removing: /var/run/dpdk/spdk_pid179912 00:41:47.841 Removing: /var/run/dpdk/spdk_pid180098 00:41:47.841 Removing: /var/run/dpdk/spdk_pid180315 00:41:47.841 Removing: /var/run/dpdk/spdk_pid180603 00:41:47.841 Removing: /var/run/dpdk/spdk_pid180855 00:41:47.841 Removing: /var/run/dpdk/spdk_pid181101 00:41:47.841 Removing: /var/run/dpdk/spdk_pid181389 00:41:47.841 Removing: /var/run/dpdk/spdk_pid182131 00:41:47.841 Removing: /var/run/dpdk/spdk_pid185130 00:41:47.841 Removing: /var/run/dpdk/spdk_pid185386 00:41:47.841 Removing: /var/run/dpdk/spdk_pid185640 00:41:47.841 Removing: /var/run/dpdk/spdk_pid185654 00:41:47.841 Removing: /var/run/dpdk/spdk_pid186144 00:41:47.841 Removing: /var/run/dpdk/spdk_pid186150 00:41:47.841 Removing: /var/run/dpdk/spdk_pid186642 00:41:47.841 Removing: /var/run/dpdk/spdk_pid186650 00:41:47.841 Removing: /var/run/dpdk/spdk_pid187044 00:41:47.841 Removing: /var/run/dpdk/spdk_pid187136 00:41:47.841 Removing: /var/run/dpdk/spdk_pid187394 00:41:47.841 Removing: /var/run/dpdk/spdk_pid187405 00:41:47.841 Removing: /var/run/dpdk/spdk_pid187966 00:41:47.841 Removing: /var/run/dpdk/spdk_pid188165 00:41:47.841 Removing: /var/run/dpdk/spdk_pid188509 00:41:47.841 Removing: /var/run/dpdk/spdk_pid192221 00:41:47.841 
Removing: /var/run/dpdk/spdk_pid196491 00:41:47.841 Removing: /var/run/dpdk/spdk_pid206745 00:41:47.841 Removing: /var/run/dpdk/spdk_pid207275 00:41:47.841 Removing: /var/run/dpdk/spdk_pid211606 00:41:47.841 Removing: /var/run/dpdk/spdk_pid212081 00:41:47.841 Removing: /var/run/dpdk/spdk_pid216759 00:41:47.841 Removing: /var/run/dpdk/spdk_pid222684 00:41:47.841 Removing: /var/run/dpdk/spdk_pid225295 00:41:47.841 Removing: /var/run/dpdk/spdk_pid235497 00:41:47.841 Removing: /var/run/dpdk/spdk_pid244453 00:41:47.841 Removing: /var/run/dpdk/spdk_pid246256 00:41:47.841 Removing: /var/run/dpdk/spdk_pid247178 00:41:47.841 Removing: /var/run/dpdk/spdk_pid264771 00:41:47.841 Removing: /var/run/dpdk/spdk_pid268805 00:41:47.841 Removing: /var/run/dpdk/spdk_pid314318 00:41:47.841 Removing: /var/run/dpdk/spdk_pid319548 00:41:47.841 Removing: /var/run/dpdk/spdk_pid325311 00:41:47.841 Removing: /var/run/dpdk/spdk_pid331972 00:41:47.841 Removing: /var/run/dpdk/spdk_pid332054 00:41:47.841 Removing: /var/run/dpdk/spdk_pid332795 00:41:47.841 Removing: /var/run/dpdk/spdk_pid333693 00:41:47.841 Removing: /var/run/dpdk/spdk_pid334617 00:41:47.841 Removing: /var/run/dpdk/spdk_pid335111 00:41:47.841 Removing: /var/run/dpdk/spdk_pid335303 00:41:47.841 Removing: /var/run/dpdk/spdk_pid335549 00:41:47.841 Removing: /var/run/dpdk/spdk_pid335567 00:41:47.841 Removing: /var/run/dpdk/spdk_pid335651 00:41:47.841 Removing: /var/run/dpdk/spdk_pid336490 00:41:47.841 Removing: /var/run/dpdk/spdk_pid337401 00:41:47.841 Removing: /var/run/dpdk/spdk_pid338315 00:41:47.841 Removing: /var/run/dpdk/spdk_pid338781 00:41:47.841 Removing: /var/run/dpdk/spdk_pid338883 00:41:47.841 Removing: /var/run/dpdk/spdk_pid339218 00:41:47.841 Removing: /var/run/dpdk/spdk_pid340257 00:41:47.841 Removing: /var/run/dpdk/spdk_pid341244 00:41:47.841 Removing: /var/run/dpdk/spdk_pid349437 00:41:47.841 Removing: /var/run/dpdk/spdk_pid379172 00:41:47.841 Removing: /var/run/dpdk/spdk_pid383688 00:41:47.841 Removing: /var/run/dpdk/spdk_pid385292 00:41:47.841 Removing: /var/run/dpdk/spdk_pid387118 00:41:47.841 Removing: /var/run/dpdk/spdk_pid387353 00:41:47.841 Removing: /var/run/dpdk/spdk_pid387375 00:41:47.841 Removing: /var/run/dpdk/spdk_pid387651 00:41:47.841 Removing: /var/run/dpdk/spdk_pid388231 00:41:47.841 Removing: /var/run/dpdk/spdk_pid390459 00:41:47.841 Removing: /var/run/dpdk/spdk_pid391231 00:41:47.841 Removing: /var/run/dpdk/spdk_pid391726 00:41:47.841 Removing: /var/run/dpdk/spdk_pid393827 00:41:47.841 Removing: /var/run/dpdk/spdk_pid394320 00:41:47.841 Removing: /var/run/dpdk/spdk_pid395034 00:41:47.841 Removing: /var/run/dpdk/spdk_pid399095 00:41:47.841 Removing: /var/run/dpdk/spdk_pid404699 00:41:47.841 Removing: /var/run/dpdk/spdk_pid404700 00:41:47.841 Removing: /var/run/dpdk/spdk_pid404701 00:41:47.841 Removing: /var/run/dpdk/spdk_pid408568 00:41:47.841 Removing: /var/run/dpdk/spdk_pid417282 00:41:47.841 Removing: /var/run/dpdk/spdk_pid421323 00:41:47.841 Removing: /var/run/dpdk/spdk_pid427307 00:41:47.841 Removing: /var/run/dpdk/spdk_pid428612 00:41:47.841 Removing: /var/run/dpdk/spdk_pid429963 00:41:48.100 Removing: /var/run/dpdk/spdk_pid431507 00:41:48.100 Removing: /var/run/dpdk/spdk_pid436592 00:41:48.100 Removing: /var/run/dpdk/spdk_pid440879 00:41:48.100 Removing: /var/run/dpdk/spdk_pid444846 00:41:48.100 Removing: /var/run/dpdk/spdk_pid452329 00:41:48.100 Removing: /var/run/dpdk/spdk_pid452434 00:41:48.100 Removing: /var/run/dpdk/spdk_pid456929 00:41:48.100 Removing: /var/run/dpdk/spdk_pid457167 00:41:48.100 Removing: 
/var/run/dpdk/spdk_pid457392 00:41:48.100 Removing: /var/run/dpdk/spdk_pid457852 00:41:48.100 Removing: /var/run/dpdk/spdk_pid457857 00:41:48.100 Removing: /var/run/dpdk/spdk_pid462341 00:41:48.100 Removing: /var/run/dpdk/spdk_pid462907 00:41:48.100 Removing: /var/run/dpdk/spdk_pid467277 00:41:48.100 Removing: /var/run/dpdk/spdk_pid469881 00:41:48.100 Removing: /var/run/dpdk/spdk_pid475226 00:41:48.100 Removing: /var/run/dpdk/spdk_pid480553 00:41:48.100 Removing: /var/run/dpdk/spdk_pid489849 00:41:48.100 Removing: /var/run/dpdk/spdk_pid496999 00:41:48.100 Removing: /var/run/dpdk/spdk_pid497060 00:41:48.100 Removing: /var/run/dpdk/spdk_pid516089 00:41:48.100 Removing: /var/run/dpdk/spdk_pid516569 00:41:48.100 Removing: /var/run/dpdk/spdk_pid517186 00:41:48.100 Removing: /var/run/dpdk/spdk_pid517729 00:41:48.100 Removing: /var/run/dpdk/spdk_pid518434 00:41:48.100 Removing: /var/run/dpdk/spdk_pid518946 00:41:48.100 Removing: /var/run/dpdk/spdk_pid519423 00:41:48.100 Removing: /var/run/dpdk/spdk_pid520108 00:41:48.100 Removing: /var/run/dpdk/spdk_pid524145 00:41:48.100 Removing: /var/run/dpdk/spdk_pid524382 00:41:48.100 Removing: /var/run/dpdk/spdk_pid530502 00:41:48.100 Removing: /var/run/dpdk/spdk_pid530644 00:41:48.100 Removing: /var/run/dpdk/spdk_pid536531 00:41:48.101 Removing: /var/run/dpdk/spdk_pid540756 00:41:48.101 Removing: /var/run/dpdk/spdk_pid550482 00:41:48.101 Removing: /var/run/dpdk/spdk_pid551163 00:41:48.101 Removing: /var/run/dpdk/spdk_pid555217 00:41:48.101 Removing: /var/run/dpdk/spdk_pid555648 00:41:48.101 Removing: /var/run/dpdk/spdk_pid559696 00:41:48.101 Removing: /var/run/dpdk/spdk_pid565398 00:41:48.101 Removing: /var/run/dpdk/spdk_pid567912 00:41:48.101 Removing: /var/run/dpdk/spdk_pid577971 00:41:48.101 Removing: /var/run/dpdk/spdk_pid587185 00:41:48.101 Removing: /var/run/dpdk/spdk_pid588864 00:41:48.101 Removing: /var/run/dpdk/spdk_pid589807 00:41:48.101 Removing: /var/run/dpdk/spdk_pid605945 00:41:48.101 Removing: /var/run/dpdk/spdk_pid609748 00:41:48.101 Removing: /var/run/dpdk/spdk_pid612434 00:41:48.101 Removing: /var/run/dpdk/spdk_pid620393 00:41:48.101 Removing: /var/run/dpdk/spdk_pid620399 00:41:48.101 Removing: /var/run/dpdk/spdk_pid625468 00:41:48.101 Removing: /var/run/dpdk/spdk_pid627921 00:41:48.101 Removing: /var/run/dpdk/spdk_pid629886 00:41:48.101 Removing: /var/run/dpdk/spdk_pid630930 00:41:48.101 Removing: /var/run/dpdk/spdk_pid632906 00:41:48.101 Removing: /var/run/dpdk/spdk_pid634184 00:41:48.101 Removing: /var/run/dpdk/spdk_pid642914 00:41:48.101 Removing: /var/run/dpdk/spdk_pid643372 00:41:48.101 Removing: /var/run/dpdk/spdk_pid643838 00:41:48.101 Removing: /var/run/dpdk/spdk_pid646217 00:41:48.101 Removing: /var/run/dpdk/spdk_pid646773 00:41:48.101 Removing: /var/run/dpdk/spdk_pid647250 00:41:48.101 Removing: /var/run/dpdk/spdk_pid651070 00:41:48.101 Removing: /var/run/dpdk/spdk_pid651213 00:41:48.101 Removing: /var/run/dpdk/spdk_pid652818 00:41:48.101 Removing: /var/run/dpdk/spdk_pid653283 00:41:48.101 Removing: /var/run/dpdk/spdk_pid653400 00:41:48.101 Clean 00:41:48.359 00:23:23 -- common/autotest_common.sh@1453 -- # return 0 00:41:48.359 00:23:23 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:41:48.359 00:23:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:48.359 00:23:23 -- common/autotest_common.sh@10 -- # set +x 00:41:48.359 00:23:23 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:41:48.359 00:23:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:48.359 00:23:23 -- common/autotest_common.sh@10 -- 
# set +x 00:41:48.359 00:23:23 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/timing.txt 00:41:48.359 00:23:23 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/udev.log ]] 00:41:48.359 00:23:23 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/udev.log 00:41:48.359 00:23:23 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:41:48.359 00:23:23 -- spdk/autotest.sh@398 -- # hostname 00:41:48.360 00:23:23 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_test.info 00:41:48.618 geninfo: WARNING: invalid characters removed from testname! 00:42:10.552 00:23:44 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:42:12.457 00:23:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:42:14.363 00:23:49 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:42:16.266 00:23:51 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:42:18.170 00:23:53 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:42:20.074 00:23:54 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:42:21.980 00:23:56 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:42:21.980 00:23:56 -- spdk/autorun.sh@1 -- $ timing_finish 00:42:21.980 00:23:56 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/timing.txt ]] 00:42:21.980 00:23:56 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:42:21.980 00:23:56 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:42:21.981 00:23:56 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/timing.txt 00:42:21.981 + [[ -n 93357 ]] 00:42:21.981 + sudo kill 93357 00:42:21.990 [Pipeline] } 00:42:22.006 [Pipeline] // stage 00:42:22.011 [Pipeline] } 00:42:22.027 [Pipeline] // timeout 00:42:22.032 [Pipeline] } 00:42:22.046 [Pipeline] // catchError 00:42:22.051 [Pipeline] } 00:42:22.065 [Pipeline] // wrap 00:42:22.071 [Pipeline] } 00:42:22.084 [Pipeline] // catchError 00:42:22.092 [Pipeline] stage 00:42:22.094 [Pipeline] { (Epilogue) 00:42:22.106 [Pipeline] catchError 00:42:22.108 [Pipeline] { 00:42:22.118 [Pipeline] echo 00:42:22.120 Cleanup processes 00:42:22.125 [Pipeline] sh 00:42:22.411 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:42:22.411 664109 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:42:22.425 [Pipeline] sh 00:42:22.711 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:42:22.711 ++ grep -v 'sudo pgrep' 00:42:22.711 ++ awk '{print $1}' 00:42:22.711 + sudo kill -9 00:42:22.711 + true 00:42:22.723 [Pipeline] sh 00:42:23.009 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:42:37.904 [Pipeline] sh 00:42:38.189 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:42:38.189 Artifacts sizes are good 00:42:38.203 [Pipeline] archiveArtifacts 00:42:38.215 Archiving artifacts 00:42:38.754 [Pipeline] sh 00:42:39.057 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 00:42:39.071 [Pipeline] cleanWs 00:42:39.081 [WS-CLEANUP] Deleting project workspace... 00:42:39.081 [WS-CLEANUP] Deferred wipeout is used... 00:42:39.088 [WS-CLEANUP] done 00:42:39.091 [Pipeline] } 00:42:39.108 [Pipeline] // catchError 00:42:39.119 [Pipeline] sh 00:42:39.408 + logger -p user.info -t JENKINS-CI 00:42:39.417 [Pipeline] } 00:42:39.430 [Pipeline] // stage 00:42:39.435 [Pipeline] } 00:42:39.449 [Pipeline] // node 00:42:39.454 [Pipeline] End of Pipeline 00:42:39.495 Finished: SUCCESS
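For reference, the coverage post-processing in the epilogue above amounts to merging the pre-test and post-test lcov captures and then stripping DPDK, system and example/app code from the combined report before the artifacts are archived. A condensed sketch follows; the paths and filter patterns are copied from the trace, and the long list of --rc coverage flags is abbreviated to the two branch/function-coverage options (the real run also passes genhtml/geninfo flags and an --ignore-errors option for the /usr/* filter).

#!/usr/bin/env bash
# Condensed version of the lcov steps traced above.
out=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output
opts=(-q --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1)

# Merge the base (pre-test) and test captures into one report...
lcov "${opts[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# ...then remove external and example code, mirroring the filters in the log.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${opts[@]}" -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
done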